Test Report: Docker_Linux_crio_arm64 21683

ec1ad263eb9d75fb579dc5b6c2680f618af3e384:2025-10-09:41836

Failed tests (42/327)

Order  Failed test  Duration (s)
29 TestAddons/serial/Volcano 0.32
35 TestAddons/parallel/Registry 15.36
36 TestAddons/parallel/RegistryCreds 0.51
37 TestAddons/parallel/Ingress 142.75
38 TestAddons/parallel/InspektorGadget 6.26
39 TestAddons/parallel/MetricsServer 5.38
41 TestAddons/parallel/CSI 55.64
42 TestAddons/parallel/Headlamp 3.14
43 TestAddons/parallel/CloudSpanner 6.32
44 TestAddons/parallel/LocalPath 8.54
45 TestAddons/parallel/NvidiaDevicePlugin 6.3
46 TestAddons/parallel/Yakd 5.31
52 TestForceSystemdFlag 515.98
53 TestForceSystemdEnv 511.84
98 TestFunctional/parallel/ServiceCmdConnect 603.57
117 TestFunctional/parallel/ImageCommands/ImageListShort 2.3
126 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.21
127 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 1.18
128 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.37
129 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.4
131 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.23
132 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.51
144 TestFunctional/parallel/ServiceCmd/DeployApp 601.01
153 TestFunctional/parallel/ServiceCmd/HTTPS 0.55
154 TestFunctional/parallel/ServiceCmd/Format 0.53
155 TestFunctional/parallel/ServiceCmd/URL 0.58
174 TestMultiControlPlane/serial/RestartClusterKeepsNodes 535.17
175 TestMultiControlPlane/serial/DeleteSecondaryNode 9.16
176 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 3.21
191 TestJSONOutput/pause/Command 2.52
197 TestJSONOutput/unpause/Command 1.83
281 TestPause/serial/Pause 6.92
298 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 3.59
303 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 2.77
308 TestStartStop/group/old-k8s-version/serial/Pause 7.66
316 TestStartStop/group/no-preload/serial/Pause 6.99
320 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 3.27
326 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 2.74
330 TestStartStop/group/embed-certs/serial/Pause 6.17
336 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 2.74
345 TestStartStop/group/newest-cni/serial/Pause 7.51
347 TestStartStop/group/default-k8s-diff-port/serial/Pause 7.99
TestAddons/serial/Volcano (0.32s)

=== RUN   TestAddons/serial/Volcano
addons_test.go:850: skipping: crio not supported
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-999657 addons disable volcano --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-999657 addons disable volcano --alsologtostderr -v=1: exit status 11 (315.676863ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1009 19:04:14.012484  302749 out.go:360] Setting OutFile to fd 1 ...
	I1009 19:04:14.013348  302749 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1009 19:04:14.013398  302749 out.go:374] Setting ErrFile to fd 2...
	I1009 19:04:14.013419  302749 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1009 19:04:14.013734  302749 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21683-294150/.minikube/bin
	I1009 19:04:14.014092  302749 mustload.go:65] Loading cluster: addons-999657
	I1009 19:04:14.014529  302749 config.go:182] Loaded profile config "addons-999657": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1009 19:04:14.014575  302749 addons.go:606] checking whether the cluster is paused
	I1009 19:04:14.014709  302749 config.go:182] Loaded profile config "addons-999657": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1009 19:04:14.014758  302749 host.go:66] Checking if "addons-999657" exists ...
	I1009 19:04:14.015254  302749 cli_runner.go:164] Run: docker container inspect addons-999657 --format={{.State.Status}}
	I1009 19:04:14.038309  302749 ssh_runner.go:195] Run: systemctl --version
	I1009 19:04:14.038386  302749 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-999657
	I1009 19:04:14.057053  302749 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/21683-294150/.minikube/machines/addons-999657/id_rsa Username:docker}
	I1009 19:04:14.164257  302749 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1009 19:04:14.164355  302749 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1009 19:04:14.203674  302749 cri.go:89] found id: "50e1747ecacea77a5f93e1d46aa99c4cb1fbce08f8f2f546154db1f9be02c796"
	I1009 19:04:14.203696  302749 cri.go:89] found id: "3f0053d1e02ad19ca3e128caaadebe9c4c0975ba7cb365a25e3c6d52870a17f0"
	I1009 19:04:14.203701  302749 cri.go:89] found id: "4e9a584f93742f2382e823963bed9b224f3ccba7c95660eb1dbca3c6b9908b3f"
	I1009 19:04:14.203705  302749 cri.go:89] found id: "f2087bf38944fc739afaac1113f222793a13154e659f060c4cdece3a7fa73071"
	I1009 19:04:14.203708  302749 cri.go:89] found id: "93ca74439d1e37925a704ede2fce384f9f9489c7b96ead6d63884128a4d9b0d1"
	I1009 19:04:14.203711  302749 cri.go:89] found id: "4011ef25cebccd0072dd10b711fedb8be54cb74589db33f0e4a5e667873eed44"
	I1009 19:04:14.203715  302749 cri.go:89] found id: "859a72eb5676e02ccdbfc8116afe2d9c4f2283fc97f6e130eb77ba45fe1f2ddf"
	I1009 19:04:14.203718  302749 cri.go:89] found id: "bb893c39a97db27f01be58b5eec66390173c64aa6dbf5fcc501e526bd34e4f74"
	I1009 19:04:14.203721  302749 cri.go:89] found id: "a9b5e7a178bf7423b4d23385f8409bd2da8f1ec9e312f7a1c786a7b9f1ec78fe"
	I1009 19:04:14.203726  302749 cri.go:89] found id: "cdcd01c9f8f4271bde354d676a0d7b97cf89b90bcce19fbab3de17f21aebb44c"
	I1009 19:04:14.203730  302749 cri.go:89] found id: "39a52fb8859c2040cedab3dbdc0662ae79f7d3abba463258d2c504bf8830448b"
	I1009 19:04:14.203732  302749 cri.go:89] found id: "7b7dc9732ce4b2127334e1e0c5b92a0ae3fbb0d316e98281fb8f8e8269c4b998"
	I1009 19:04:14.203736  302749 cri.go:89] found id: "ec4db71d717ddecd989934884e42ed0846d635333858d9d800181bfa0530c564"
	I1009 19:04:14.203739  302749 cri.go:89] found id: "f7e6d7b389c66f70de1f9dfa7a02589e922c296513ed0a3835867069f4fa9db8"
	I1009 19:04:14.203744  302749 cri.go:89] found id: "fbc396505d84e35ea37081e17222da9738c8ff9edd4bf2e014fdb1bf99f6de56"
	I1009 19:04:14.203749  302749 cri.go:89] found id: "c8fc026ca1019d8a0f4406f6cc4f8f68a03b36d38c792e26f09d7d78bf7ea9e3"
	I1009 19:04:14.203752  302749 cri.go:89] found id: "2823efa103e5ee38b792c909eaeee0c995e8a8302f5b0f522f6d786b3be0e7ba"
	I1009 19:04:14.203755  302749 cri.go:89] found id: "532259f4c5926820e3e18f689f80a1bc102631a6a0a05374223820ef91ec414f"
	I1009 19:04:14.203758  302749 cri.go:89] found id: "d85964586435683cdf29db9f8d8e0fd3637c91ff01ef302bb910a1397cf75b01"
	I1009 19:04:14.203761  302749 cri.go:89] found id: "7fcbf1be4bdef0161b5efca4fb661fd9a8fddc41f80f6974b49d4a8bb8d17634"
	I1009 19:04:14.203765  302749 cri.go:89] found id: "09a19318421aec51d9c6d371040aa4863198795c9d616ed4df00c26edc18b036"
	I1009 19:04:14.203768  302749 cri.go:89] found id: "aaa0ded06ea4b321c4f9a079d4cf69d526ba351445ed008be4734d67b7ea8524"
	I1009 19:04:14.203771  302749 cri.go:89] found id: "804d5a04697a7b5835636d98ba88b94561d9699443a8eadbbe90fd28d0b160cb"
	I1009 19:04:14.203774  302749 cri.go:89] found id: ""
	I1009 19:04:14.203822  302749 ssh_runner.go:195] Run: sudo runc list -f json
	I1009 19:04:14.219391  302749 out.go:203] 
	W1009 19:04:14.222267  302749 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-09T19:04:14Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-09T19:04:14Z" level=error msg="open /run/runc: no such file or directory"
	
	W1009 19:04:14.222287  302749 out.go:285] * 
	* 
	W1009 19:04:14.238089  302749 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9bd16c244da2144137a37071fb77e06a574610a0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9bd16c244da2144137a37071fb77e06a574610a0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1009 19:04:14.241163  302749 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable volcano addon: args "out/minikube-linux-arm64 -p addons-999657 addons disable volcano --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/serial/Volcano (0.32s)
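Most of the addon-related failures in this run share this signature: the `addons disable` command exits with status 11 (MK_ADDON_DISABLE_PAUSED) because minikube's paused-state check runs `sudo runc list -f json` inside the node, and on this crio node that command fails with "open /run/runc: no such file or directory". Below is a minimal sketch for repeating that check by hand; it assumes the addons-999657 profile is still running, and the expectation that the same runc error reproduces is an assumption, not a captured result.

	# Sketch: repeat the paused-state check from the stderr capture above.
	# The crictl listing succeeded in the log; the runc listing is the step that failed.
	out/minikube-linux-arm64 -p addons-999657 ssh "sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	out/minikube-linux-arm64 -p addons-999657 ssh "sudo runc list -f json"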

TestAddons/parallel/Registry (15.36s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:382: registry stabilized in 7.078152ms
addons_test.go:384: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-66898fdd98-d8jgl" [d398f51d-a918-4ce0-89c9-47064bd1ae01] Running
addons_test.go:384: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 6.003706713s
addons_test.go:387: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-proxy-q9p6k" [ae33fd9b-bb98-4d1b-9150-ab438ca12680] Running
addons_test.go:387: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.003509092s
addons_test.go:392: (dbg) Run:  kubectl --context addons-999657 delete po -l run=registry-test --now
addons_test.go:397: (dbg) Run:  kubectl --context addons-999657 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:397: (dbg) Done: kubectl --context addons-999657 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (3.642286813s)
addons_test.go:411: (dbg) Run:  out/minikube-linux-arm64 -p addons-999657 ip
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-999657 addons disable registry --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-999657 addons disable registry --alsologtostderr -v=1: exit status 11 (374.375459ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1009 19:04:39.604363  303735 out.go:360] Setting OutFile to fd 1 ...
	I1009 19:04:39.605350  303735 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1009 19:04:39.605397  303735 out.go:374] Setting ErrFile to fd 2...
	I1009 19:04:39.605418  303735 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1009 19:04:39.605716  303735 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21683-294150/.minikube/bin
	I1009 19:04:39.606052  303735 mustload.go:65] Loading cluster: addons-999657
	I1009 19:04:39.606467  303735 config.go:182] Loaded profile config "addons-999657": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1009 19:04:39.606509  303735 addons.go:606] checking whether the cluster is paused
	I1009 19:04:39.606632  303735 config.go:182] Loaded profile config "addons-999657": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1009 19:04:39.606665  303735 host.go:66] Checking if "addons-999657" exists ...
	I1009 19:04:39.607220  303735 cli_runner.go:164] Run: docker container inspect addons-999657 --format={{.State.Status}}
	I1009 19:04:39.634371  303735 ssh_runner.go:195] Run: systemctl --version
	I1009 19:04:39.634421  303735 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-999657
	I1009 19:04:39.664954  303735 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/21683-294150/.minikube/machines/addons-999657/id_rsa Username:docker}
	I1009 19:04:39.793496  303735 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1009 19:04:39.793601  303735 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1009 19:04:39.862095  303735 cri.go:89] found id: "50e1747ecacea77a5f93e1d46aa99c4cb1fbce08f8f2f546154db1f9be02c796"
	I1009 19:04:39.862118  303735 cri.go:89] found id: "3f0053d1e02ad19ca3e128caaadebe9c4c0975ba7cb365a25e3c6d52870a17f0"
	I1009 19:04:39.862123  303735 cri.go:89] found id: "4e9a584f93742f2382e823963bed9b224f3ccba7c95660eb1dbca3c6b9908b3f"
	I1009 19:04:39.862127  303735 cri.go:89] found id: "f2087bf38944fc739afaac1113f222793a13154e659f060c4cdece3a7fa73071"
	I1009 19:04:39.862130  303735 cri.go:89] found id: "93ca74439d1e37925a704ede2fce384f9f9489c7b96ead6d63884128a4d9b0d1"
	I1009 19:04:39.862139  303735 cri.go:89] found id: "4011ef25cebccd0072dd10b711fedb8be54cb74589db33f0e4a5e667873eed44"
	I1009 19:04:39.862143  303735 cri.go:89] found id: "859a72eb5676e02ccdbfc8116afe2d9c4f2283fc97f6e130eb77ba45fe1f2ddf"
	I1009 19:04:39.862146  303735 cri.go:89] found id: "bb893c39a97db27f01be58b5eec66390173c64aa6dbf5fcc501e526bd34e4f74"
	I1009 19:04:39.862149  303735 cri.go:89] found id: "a9b5e7a178bf7423b4d23385f8409bd2da8f1ec9e312f7a1c786a7b9f1ec78fe"
	I1009 19:04:39.862155  303735 cri.go:89] found id: "cdcd01c9f8f4271bde354d676a0d7b97cf89b90bcce19fbab3de17f21aebb44c"
	I1009 19:04:39.862158  303735 cri.go:89] found id: "39a52fb8859c2040cedab3dbdc0662ae79f7d3abba463258d2c504bf8830448b"
	I1009 19:04:39.862161  303735 cri.go:89] found id: "7b7dc9732ce4b2127334e1e0c5b92a0ae3fbb0d316e98281fb8f8e8269c4b998"
	I1009 19:04:39.862164  303735 cri.go:89] found id: "ec4db71d717ddecd989934884e42ed0846d635333858d9d800181bfa0530c564"
	I1009 19:04:39.862167  303735 cri.go:89] found id: "f7e6d7b389c66f70de1f9dfa7a02589e922c296513ed0a3835867069f4fa9db8"
	I1009 19:04:39.862171  303735 cri.go:89] found id: "fbc396505d84e35ea37081e17222da9738c8ff9edd4bf2e014fdb1bf99f6de56"
	I1009 19:04:39.862175  303735 cri.go:89] found id: "c8fc026ca1019d8a0f4406f6cc4f8f68a03b36d38c792e26f09d7d78bf7ea9e3"
	I1009 19:04:39.862178  303735 cri.go:89] found id: "2823efa103e5ee38b792c909eaeee0c995e8a8302f5b0f522f6d786b3be0e7ba"
	I1009 19:04:39.862182  303735 cri.go:89] found id: "532259f4c5926820e3e18f689f80a1bc102631a6a0a05374223820ef91ec414f"
	I1009 19:04:39.862185  303735 cri.go:89] found id: "d85964586435683cdf29db9f8d8e0fd3637c91ff01ef302bb910a1397cf75b01"
	I1009 19:04:39.862188  303735 cri.go:89] found id: "7fcbf1be4bdef0161b5efca4fb661fd9a8fddc41f80f6974b49d4a8bb8d17634"
	I1009 19:04:39.862193  303735 cri.go:89] found id: "09a19318421aec51d9c6d371040aa4863198795c9d616ed4df00c26edc18b036"
	I1009 19:04:39.862196  303735 cri.go:89] found id: "aaa0ded06ea4b321c4f9a079d4cf69d526ba351445ed008be4734d67b7ea8524"
	I1009 19:04:39.862199  303735 cri.go:89] found id: "804d5a04697a7b5835636d98ba88b94561d9699443a8eadbbe90fd28d0b160cb"
	I1009 19:04:39.862202  303735 cri.go:89] found id: ""
	I1009 19:04:39.862256  303735 ssh_runner.go:195] Run: sudo runc list -f json
	I1009 19:04:39.884265  303735 out.go:203] 
	W1009 19:04:39.887011  303735 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-09T19:04:39Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-09T19:04:39Z" level=error msg="open /run/runc: no such file or directory"
	
	W1009 19:04:39.887049  303735 out.go:285] * 
	* 
	W1009 19:04:39.892216  303735 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_94fa7435cdb0fda2540861b9b71556c8cae5c5f1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_94fa7435cdb0fda2540861b9b71556c8cae5c5f1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1009 19:04:39.895227  303735 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable registry addon: args "out/minikube-linux-arm64 -p addons-999657 addons disable registry --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Registry (15.36s)

TestAddons/parallel/RegistryCreds (0.51s)

=== RUN   TestAddons/parallel/RegistryCreds
=== PAUSE TestAddons/parallel/RegistryCreds

=== CONT  TestAddons/parallel/RegistryCreds
addons_test.go:323: registry-creds stabilized in 3.536633ms
addons_test.go:325: (dbg) Run:  out/minikube-linux-arm64 addons configure registry-creds -f ./testdata/addons_testconfig.json -p addons-999657
addons_test.go:332: (dbg) Run:  kubectl --context addons-999657 -n kube-system get secret -o yaml
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-999657 addons disable registry-creds --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-999657 addons disable registry-creds --alsologtostderr -v=1: exit status 11 (265.183666ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1009 19:05:50.014010  305426 out.go:360] Setting OutFile to fd 1 ...
	I1009 19:05:50.014890  305426 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1009 19:05:50.014910  305426 out.go:374] Setting ErrFile to fd 2...
	I1009 19:05:50.014917  305426 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1009 19:05:50.015265  305426 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21683-294150/.minikube/bin
	I1009 19:05:50.015628  305426 mustload.go:65] Loading cluster: addons-999657
	I1009 19:05:50.016161  305426 config.go:182] Loaded profile config "addons-999657": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1009 19:05:50.016186  305426 addons.go:606] checking whether the cluster is paused
	I1009 19:05:50.016334  305426 config.go:182] Loaded profile config "addons-999657": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1009 19:05:50.016352  305426 host.go:66] Checking if "addons-999657" exists ...
	I1009 19:05:50.016847  305426 cli_runner.go:164] Run: docker container inspect addons-999657 --format={{.State.Status}}
	I1009 19:05:50.048641  305426 ssh_runner.go:195] Run: systemctl --version
	I1009 19:05:50.048706  305426 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-999657
	I1009 19:05:50.067762  305426 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/21683-294150/.minikube/machines/addons-999657/id_rsa Username:docker}
	I1009 19:05:50.171906  305426 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1009 19:05:50.171994  305426 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1009 19:05:50.204417  305426 cri.go:89] found id: "50e1747ecacea77a5f93e1d46aa99c4cb1fbce08f8f2f546154db1f9be02c796"
	I1009 19:05:50.204437  305426 cri.go:89] found id: "3f0053d1e02ad19ca3e128caaadebe9c4c0975ba7cb365a25e3c6d52870a17f0"
	I1009 19:05:50.204443  305426 cri.go:89] found id: "4e9a584f93742f2382e823963bed9b224f3ccba7c95660eb1dbca3c6b9908b3f"
	I1009 19:05:50.204447  305426 cri.go:89] found id: "f2087bf38944fc739afaac1113f222793a13154e659f060c4cdece3a7fa73071"
	I1009 19:05:50.204452  305426 cri.go:89] found id: "93ca74439d1e37925a704ede2fce384f9f9489c7b96ead6d63884128a4d9b0d1"
	I1009 19:05:50.204457  305426 cri.go:89] found id: "4011ef25cebccd0072dd10b711fedb8be54cb74589db33f0e4a5e667873eed44"
	I1009 19:05:50.204460  305426 cri.go:89] found id: "859a72eb5676e02ccdbfc8116afe2d9c4f2283fc97f6e130eb77ba45fe1f2ddf"
	I1009 19:05:50.204464  305426 cri.go:89] found id: "bb893c39a97db27f01be58b5eec66390173c64aa6dbf5fcc501e526bd34e4f74"
	I1009 19:05:50.204467  305426 cri.go:89] found id: "a9b5e7a178bf7423b4d23385f8409bd2da8f1ec9e312f7a1c786a7b9f1ec78fe"
	I1009 19:05:50.204473  305426 cri.go:89] found id: "cdcd01c9f8f4271bde354d676a0d7b97cf89b90bcce19fbab3de17f21aebb44c"
	I1009 19:05:50.204477  305426 cri.go:89] found id: "39a52fb8859c2040cedab3dbdc0662ae79f7d3abba463258d2c504bf8830448b"
	I1009 19:05:50.204480  305426 cri.go:89] found id: "7b7dc9732ce4b2127334e1e0c5b92a0ae3fbb0d316e98281fb8f8e8269c4b998"
	I1009 19:05:50.204483  305426 cri.go:89] found id: "ec4db71d717ddecd989934884e42ed0846d635333858d9d800181bfa0530c564"
	I1009 19:05:50.204486  305426 cri.go:89] found id: "f7e6d7b389c66f70de1f9dfa7a02589e922c296513ed0a3835867069f4fa9db8"
	I1009 19:05:50.204490  305426 cri.go:89] found id: "fbc396505d84e35ea37081e17222da9738c8ff9edd4bf2e014fdb1bf99f6de56"
	I1009 19:05:50.204499  305426 cri.go:89] found id: "c8fc026ca1019d8a0f4406f6cc4f8f68a03b36d38c792e26f09d7d78bf7ea9e3"
	I1009 19:05:50.204502  305426 cri.go:89] found id: "2823efa103e5ee38b792c909eaeee0c995e8a8302f5b0f522f6d786b3be0e7ba"
	I1009 19:05:50.204507  305426 cri.go:89] found id: "532259f4c5926820e3e18f689f80a1bc102631a6a0a05374223820ef91ec414f"
	I1009 19:05:50.204511  305426 cri.go:89] found id: "d85964586435683cdf29db9f8d8e0fd3637c91ff01ef302bb910a1397cf75b01"
	I1009 19:05:50.204514  305426 cri.go:89] found id: "7fcbf1be4bdef0161b5efca4fb661fd9a8fddc41f80f6974b49d4a8bb8d17634"
	I1009 19:05:50.204518  305426 cri.go:89] found id: "09a19318421aec51d9c6d371040aa4863198795c9d616ed4df00c26edc18b036"
	I1009 19:05:50.204521  305426 cri.go:89] found id: "aaa0ded06ea4b321c4f9a079d4cf69d526ba351445ed008be4734d67b7ea8524"
	I1009 19:05:50.204529  305426 cri.go:89] found id: "804d5a04697a7b5835636d98ba88b94561d9699443a8eadbbe90fd28d0b160cb"
	I1009 19:05:50.204532  305426 cri.go:89] found id: ""
	I1009 19:05:50.204589  305426 ssh_runner.go:195] Run: sudo runc list -f json
	I1009 19:05:50.218659  305426 out.go:203] 
	W1009 19:05:50.220326  305426 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-09T19:05:50Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-09T19:05:50Z" level=error msg="open /run/runc: no such file or directory"
	
	W1009 19:05:50.220350  305426 out.go:285] * 
	* 
	W1009 19:05:50.225425  305426 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_ac42ae7bb4bac5cd909a08f6506d602b3d2ccf6c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_ac42ae7bb4bac5cd909a08f6506d602b3d2ccf6c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1009 19:05:50.226877  305426 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable registry-creds addon: args "out/minikube-linux-arm64 -p addons-999657 addons disable registry-creds --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/RegistryCreds (0.51s)

TestAddons/parallel/Ingress (142.75s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run:  kubectl --context addons-999657 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run:  kubectl --context addons-999657 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run:  kubectl --context addons-999657 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:352: "nginx" [9b9915bf-a8cd-40bc-97eb-304d989c46a0] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:352: "nginx" [9b9915bf-a8cd-40bc-97eb-304d989c46a0] Running
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 8.004440101s
I1009 19:05:00.401420  296002 kapi.go:150] Service nginx in namespace default found.
addons_test.go:264: (dbg) Run:  out/minikube-linux-arm64 -p addons-999657 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:264: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-999657 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m9.643312951s)

** stderr ** 
	ssh: Process exited with status 28

** /stderr **
addons_test.go:280: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
addons_test.go:288: (dbg) Run:  kubectl --context addons-999657 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:293: (dbg) Run:  out/minikube-linux-arm64 -p addons-999657 ip
addons_test.go:299: (dbg) Run:  nslookup hello-john.test 192.168.49.2
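The failing step above is the in-node probe of the ingress controller: the remote curl exited with status 28 (curl's operation-timed-out code) after roughly 2m9s, so the request on port 80 inside the node never got a response. A hedged diagnostic sketch follows; the profile name comes from this run, while the -v and --max-time flags are illustrative troubleshooting additions, not part of the test's command.

	# Sketch: re-run the ingress probe with verbose output and a short timeout
	# to see whether it stalls at connect time or while waiting for a response.
	out/minikube-linux-arm64 -p addons-999657 ssh "curl -v --max-time 10 -H 'Host: nginx.example.com' http://127.0.0.1/"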
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestAddons/parallel/Ingress]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestAddons/parallel/Ingress]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect addons-999657
helpers_test.go:243: (dbg) docker inspect addons-999657:

-- stdout --
	[
	    {
	        "Id": "ecd6cd18f751718cb40377c19ed8fb91d99c6fd2c7932de2df67df8a9fb7b9bd",
	        "Created": "2025-10-09T19:01:42.773639389Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 297173,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-09T19:01:42.832045963Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:0e619a02aa1f399c748a05785ca381533d7a01df724b62f3a9ddd9e288db8f6f",
	        "ResolvConfPath": "/var/lib/docker/containers/ecd6cd18f751718cb40377c19ed8fb91d99c6fd2c7932de2df67df8a9fb7b9bd/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/ecd6cd18f751718cb40377c19ed8fb91d99c6fd2c7932de2df67df8a9fb7b9bd/hostname",
	        "HostsPath": "/var/lib/docker/containers/ecd6cd18f751718cb40377c19ed8fb91d99c6fd2c7932de2df67df8a9fb7b9bd/hosts",
	        "LogPath": "/var/lib/docker/containers/ecd6cd18f751718cb40377c19ed8fb91d99c6fd2c7932de2df67df8a9fb7b9bd/ecd6cd18f751718cb40377c19ed8fb91d99c6fd2c7932de2df67df8a9fb7b9bd-json.log",
	        "Name": "/addons-999657",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-999657:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "addons-999657",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "ecd6cd18f751718cb40377c19ed8fb91d99c6fd2c7932de2df67df8a9fb7b9bd",
	                "LowerDir": "/var/lib/docker/overlay2/38454846971f2b21cec936743dc4c4192a2e913d6fb39fa2ee1d6c41b9b691b6-init/diff:/var/lib/docker/overlay2/810a91395ed9b7ed2c0bbbdee8600efcf64f88722cbabc47d471235a9f901ed9/diff",
	                "MergedDir": "/var/lib/docker/overlay2/38454846971f2b21cec936743dc4c4192a2e913d6fb39fa2ee1d6c41b9b691b6/merged",
	                "UpperDir": "/var/lib/docker/overlay2/38454846971f2b21cec936743dc4c4192a2e913d6fb39fa2ee1d6c41b9b691b6/diff",
	                "WorkDir": "/var/lib/docker/overlay2/38454846971f2b21cec936743dc4c4192a2e913d6fb39fa2ee1d6c41b9b691b6/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-999657",
	                "Source": "/var/lib/docker/volumes/addons-999657/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-999657",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-999657",
	                "name.minikube.sigs.k8s.io": "addons-999657",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "753810abaf007ac4f831309901d634c334dccb43ce0143ff6439762a6a39d5a8",
	            "SandboxKey": "/var/run/docker/netns/753810abaf00",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33139"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33140"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33143"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33141"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33142"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-999657": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "7a:f5:bf:e8:96:88",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "2c1c6236327c66b1abe1475e3f979bdb96192bd80d34b9b787ee03064ac7e95d",
	                    "EndpointID": "dcfa7f8b7c67063e84bdcfacc89b13f37a8ba2fd94f08195d90a8cbda63543e1",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-999657",
	                        "ecd6cd18f751"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p addons-999657 -n addons-999657
helpers_test.go:252: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p addons-999657 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p addons-999657 logs -n 25: (1.570686868s)
helpers_test.go:260: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	│ COMMAND │ ARGS │ PROFILE │ USER │ VERSION │ START TIME │ END TIME │
	│ delete  │ -p download-docker-847696 │ download-docker-847696 │ jenkins │ v1.37.0 │ 09 Oct 25 19:01 UTC │ 09 Oct 25 19:01 UTC │
	│ start   │ --download-only -p binary-mirror-719553 --alsologtostderr --binary-mirror http://127.0.0.1:39775 --driver=docker  --container-runtime=crio │ binary-mirror-719553 │ jenkins │ v1.37.0 │ 09 Oct 25 19:01 UTC │ │
	│ delete  │ -p binary-mirror-719553 │ binary-mirror-719553 │ jenkins │ v1.37.0 │ 09 Oct 25 19:01 UTC │ 09 Oct 25 19:01 UTC │
	│ addons  │ enable dashboard -p addons-999657 │ addons-999657 │ jenkins │ v1.37.0 │ 09 Oct 25 19:01 UTC │ │
	│ addons  │ disable dashboard -p addons-999657 │ addons-999657 │ jenkins │ v1.37.0 │ 09 Oct 25 19:01 UTC │ │
	│ start   │ -p addons-999657 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher │ addons-999657 │ jenkins │ v1.37.0 │ 09 Oct 25 19:01 UTC │ 09 Oct 25 19:04 UTC │
	│ addons  │ addons-999657 addons disable volcano --alsologtostderr -v=1 │ addons-999657 │ jenkins │ v1.37.0 │ 09 Oct 25 19:04 UTC │ │
	│ addons  │ addons-999657 addons disable gcp-auth --alsologtostderr -v=1 │ addons-999657 │ jenkins │ v1.37.0 │ 09 Oct 25 19:04 UTC │ │
	│ addons  │ enable headlamp -p addons-999657 --alsologtostderr -v=1 │ addons-999657 │ jenkins │ v1.37.0 │ 09 Oct 25 19:04 UTC │ │
	│ addons  │ addons-999657 addons disable headlamp --alsologtostderr -v=1 │ addons-999657 │ jenkins │ v1.37.0 │ 09 Oct 25 19:04 UTC │ │
	│ addons  │ addons-999657 addons disable yakd --alsologtostderr -v=1 │ addons-999657 │ jenkins │ v1.37.0 │ 09 Oct 25 19:04 UTC │ │
	│ addons  │ addons-999657 addons disable nvidia-device-plugin --alsologtostderr -v=1 │ addons-999657 │ jenkins │ v1.37.0 │ 09 Oct 25 19:04 UTC │ │
	│ ip      │ addons-999657 ip │ addons-999657 │ jenkins │ v1.37.0 │ 09 Oct 25 19:04 UTC │ 09 Oct 25 19:04 UTC │
	│ addons  │ addons-999657 addons disable registry --alsologtostderr -v=1 │ addons-999657 │ jenkins │ v1.37.0 │ 09 Oct 25 19:04 UTC │ │
	│ addons  │ addons-999657 addons disable cloud-spanner --alsologtostderr -v=1 │ addons-999657 │ jenkins │ v1.37.0 │ 09 Oct 25 19:04 UTC │ │
	│ ssh     │ addons-999657 ssh cat /opt/local-path-provisioner/pvc-043c2597-2dd6-45b6-98a9-80ebf890bc70_default_test-pvc/file1 │ addons-999657 │ jenkins │ v1.37.0 │ 09 Oct 25 19:04 UTC │ 09 Oct 25 19:04 UTC │
	│ addons  │ addons-999657 addons disable storage-provisioner-rancher --alsologtostderr -v=1 │ addons-999657 │ jenkins │ v1.37.0 │ 09 Oct 25 19:04 UTC │ │
	│ addons  │ addons-999657 addons disable metrics-server --alsologtostderr -v=1 │ addons-999657 │ jenkins │ v1.37.0 │ 09 Oct 25 19:04 UTC │ │
	│ ssh     │ addons-999657 ssh curl -s http://127.0.0.1/ -H 'Host: nginx.example.com' │ addons-999657 │ jenkins │ v1.37.0 │ 09 Oct 25 19:05 UTC │ │
	│ addons  │ addons-999657 addons disable volumesnapshots --alsologtostderr -v=1 │ addons-999657 │ jenkins │ v1.37.0 │ 09 Oct 25 19:05 UTC │ │
	│ addons  │ addons-999657 addons disable csi-hostpath-driver --alsologtostderr -v=1 │ addons-999657 │ jenkins │ v1.37.0 │ 09 Oct 25 19:05 UTC │ │
	│ addons  │ addons-999657 addons disable inspektor-gadget --alsologtostderr -v=1 │ addons-999657 │ jenkins │ v1.37.0 │ 09 Oct 25 19:05 UTC │ │
	│ addons  │ configure registry-creds -f ./testdata/addons_testconfig.json -p addons-999657 │ addons-999657 │ jenkins │ v1.37.0 │ 09 Oct 25 19:05 UTC │ 09 Oct 25 19:05 UTC │
	│ addons  │ addons-999657 addons disable registry-creds --alsologtostderr -v=1 │ addons-999657 │ jenkins │ v1.37.0 │ 09 Oct 25 19:05 UTC │ │
	│ ip      │ addons-999657 ip │ addons-999657 │ jenkins │ v1.37.0 │ 09 Oct 25 19:07 UTC │ 09 Oct 25 19:07 UTC │
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
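The table above lists every minikube invocation replayed during this run. For reference, a minimal sketch of re-running one of the listed addon commands by hand against the same profile (assuming the addons-999657 profile still exists and the out/minikube-linux-arm64 binary from this job is used):

    # disable an addon on the test profile with verbose logging, as the tests do
    out/minikube-linux-arm64 -p addons-999657 addons disable metrics-server --alsologtostderr -v=1
    # list addon states afterwards to confirm the change
    out/minikube-linux-arm64 -p addons-999657 addons list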
	
	
	==> Last Start <==
	Log file created at: 2025/10/09 19:01:16
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1009 19:01:16.679364  296772 out.go:360] Setting OutFile to fd 1 ...
	I1009 19:01:16.679508  296772 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1009 19:01:16.679519  296772 out.go:374] Setting ErrFile to fd 2...
	I1009 19:01:16.679525  296772 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1009 19:01:16.679777  296772 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21683-294150/.minikube/bin
	I1009 19:01:16.680280  296772 out.go:368] Setting JSON to false
	I1009 19:01:16.681140  296772 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":6216,"bootTime":1760030261,"procs":146,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1009 19:01:16.681207  296772 start.go:143] virtualization:  
	I1009 19:01:16.682707  296772 out.go:179] * [addons-999657] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1009 19:01:16.684182  296772 out.go:179]   - MINIKUBE_LOCATION=21683
	I1009 19:01:16.684277  296772 notify.go:221] Checking for updates...
	I1009 19:01:16.686820  296772 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1009 19:01:16.688293  296772 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21683-294150/kubeconfig
	I1009 19:01:16.689379  296772 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21683-294150/.minikube
	I1009 19:01:16.690543  296772 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1009 19:01:16.691682  296772 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1009 19:01:16.693032  296772 driver.go:422] Setting default libvirt URI to qemu:///system
	I1009 19:01:16.714648  296772 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1009 19:01:16.714767  296772 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1009 19:01:16.776724  296772 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:23 OomKillDisable:true NGoroutines:43 SystemTime:2025-10-09 19:01:16.767180971 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214827008 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1009 19:01:16.776839  296772 docker.go:319] overlay module found
	I1009 19:01:16.778333  296772 out.go:179] * Using the docker driver based on user configuration
	I1009 19:01:16.779456  296772 start.go:309] selected driver: docker
	I1009 19:01:16.779483  296772 start.go:930] validating driver "docker" against <nil>
	I1009 19:01:16.779498  296772 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1009 19:01:16.780233  296772 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1009 19:01:16.833715  296772 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:23 OomKillDisable:true NGoroutines:43 SystemTime:2025-10-09 19:01:16.824873484 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214827008 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1009 19:01:16.833874  296772 start_flags.go:328] no existing cluster config was found, will generate one from the flags 
	I1009 19:01:16.834098  296772 start_flags.go:993] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1009 19:01:16.835447  296772 out.go:179] * Using Docker driver with root privileges
	I1009 19:01:16.836619  296772 cni.go:84] Creating CNI manager for ""
	I1009 19:01:16.836694  296772 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1009 19:01:16.836709  296772 start_flags.go:337] Found "CNI" CNI - setting NetworkPlugin=cni
	I1009 19:01:16.836782  296772 start.go:353] cluster config:
	{Name:addons-999657 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-999657 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime
:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:
AutoPauseInterval:1m0s}
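The cluster config printed above is persisted shortly afterwards to .minikube/profiles/addons-999657/config.json (see the "Saving config" line below). A small sketch for pulling out the fields most relevant here, assuming jq is available and that the JSON field names mirror the struct fields shown in the log:

    CFG=/home/jenkins/minikube-integration/21683-294150/.minikube/profiles/addons-999657/config.json
    # driver, runtime and Kubernetes version chosen for this run
    jq '{Name, Driver, KubernetesVersion: .KubernetesConfig.KubernetesVersion, ContainerRuntime: .KubernetesConfig.ContainerRuntime}' "$CFG"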
	I1009 19:01:16.838936  296772 out.go:179] * Starting "addons-999657" primary control-plane node in "addons-999657" cluster
	I1009 19:01:16.840110  296772 cache.go:123] Beginning downloading kic base image for docker with crio
	I1009 19:01:16.841621  296772 out.go:179] * Pulling base image v0.0.48-1759745255-21703 ...
	I1009 19:01:16.842917  296772 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1009 19:01:16.842946  296772 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon
	I1009 19:01:16.842977  296772 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21683-294150/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1009 19:01:16.842987  296772 cache.go:58] Caching tarball of preloaded images
	I1009 19:01:16.843080  296772 preload.go:233] Found /home/jenkins/minikube-integration/21683-294150/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1009 19:01:16.843090  296772 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1009 19:01:16.843412  296772 profile.go:143] Saving config to /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/addons-999657/config.json ...
	I1009 19:01:16.843441  296772 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/addons-999657/config.json: {Name:mk995129adb1de29ffda6c1745cc80de4b941c08 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 19:01:16.858617  296772 cache.go:152] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 to local cache
	I1009 19:01:16.858757  296772 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local cache directory
	I1009 19:01:16.858778  296772 image.go:68] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local cache directory, skipping pull
	I1009 19:01:16.858783  296772 image.go:137] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 exists in cache, skipping pull
	I1009 19:01:16.858790  296772 cache.go:155] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 as a tarball
	I1009 19:01:16.858795  296772 cache.go:165] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 from local cache
	I1009 19:01:34.886055  296772 cache.go:167] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 from cached tarball
	I1009 19:01:34.886102  296772 cache.go:232] Successfully downloaded all kic artifacts
	I1009 19:01:34.886132  296772 start.go:361] acquireMachinesLock for addons-999657: {Name:mk16a18698d56f1afca86a28d3906fc672e3afb0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1009 19:01:34.886263  296772 start.go:365] duration metric: took 109.17µs to acquireMachinesLock for "addons-999657"
	I1009 19:01:34.886298  296772 start.go:94] Provisioning new machine with config: &{Name:addons-999657 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-999657 Namespace:default APIServerHAVIP: APIServerName:min
ikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath:
SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1009 19:01:34.886389  296772 start.go:126] createHost starting for "" (driver="docker")
	I1009 19:01:34.888004  296772 out.go:252] * Creating docker container (CPUs=2, Memory=4096MB) ...
	I1009 19:01:34.888251  296772 start.go:160] libmachine.API.Create for "addons-999657" (driver="docker")
	I1009 19:01:34.888299  296772 client.go:168] LocalClient.Create starting
	I1009 19:01:34.888423  296772 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/21683-294150/.minikube/certs/ca.pem
	I1009 19:01:35.335047  296772 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/21683-294150/.minikube/certs/cert.pem
	I1009 19:01:36.140376  296772 cli_runner.go:164] Run: docker network inspect addons-999657 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1009 19:01:36.156680  296772 cli_runner.go:211] docker network inspect addons-999657 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1009 19:01:36.156766  296772 network_create.go:284] running [docker network inspect addons-999657] to gather additional debugging logs...
	I1009 19:01:36.156788  296772 cli_runner.go:164] Run: docker network inspect addons-999657
	W1009 19:01:36.173149  296772 cli_runner.go:211] docker network inspect addons-999657 returned with exit code 1
	I1009 19:01:36.173182  296772 network_create.go:287] error running [docker network inspect addons-999657]: docker network inspect addons-999657: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-999657 not found
	I1009 19:01:36.173196  296772 network_create.go:289] output of [docker network inspect addons-999657]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-999657 not found
	
	** /stderr **
	I1009 19:01:36.173311  296772 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1009 19:01:36.191160  296772 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001965820}
	I1009 19:01:36.191200  296772 network_create.go:124] attempt to create docker network addons-999657 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1009 19:01:36.191256  296772 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-999657 addons-999657
	I1009 19:01:36.251942  296772 network_create.go:108] docker network addons-999657 192.168.49.0/24 created
	I1009 19:01:36.251977  296772 kic.go:121] calculated static IP "192.168.49.2" for the "addons-999657" container
	I1009 19:01:36.252057  296772 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1009 19:01:36.267520  296772 cli_runner.go:164] Run: docker volume create addons-999657 --label name.minikube.sigs.k8s.io=addons-999657 --label created_by.minikube.sigs.k8s.io=true
	I1009 19:01:36.284809  296772 oci.go:103] Successfully created a docker volume addons-999657
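At this point a dedicated bridge network (192.168.49.0/24) and a named volume, both called addons-999657, exist on the host. A sketch of checking them directly with the Docker CLI, assuming the same local daemon the test used:

    # confirm the subnet and gateway minikube picked for the profile network
    docker network inspect addons-999657 --format '{{range .IPAM.Config}}{{.Subnet}} {{.Gateway}}{{end}}'
    # the volume that backs /var (images, etcd data) inside the node container
    docker volume inspect addons-999657 --format '{{.Mountpoint}}'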
	I1009 19:01:36.284907  296772 cli_runner.go:164] Run: docker run --rm --name addons-999657-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-999657 --entrypoint /usr/bin/test -v addons-999657:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 -d /var/lib
	I1009 19:01:38.221698  296772 cli_runner.go:217] Completed: docker run --rm --name addons-999657-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-999657 --entrypoint /usr/bin/test -v addons-999657:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 -d /var/lib: (1.936738061s)
	I1009 19:01:38.221731  296772 oci.go:107] Successfully prepared a docker volume addons-999657
	I1009 19:01:38.221765  296772 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1009 19:01:38.221787  296772 kic.go:194] Starting extracting preloaded images to volume ...
	I1009 19:01:38.221864  296772 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21683-294150/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-999657:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 -I lz4 -xf /preloaded.tar -C /extractDir
	I1009 19:01:42.698005  296772 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21683-294150/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-999657:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 -I lz4 -xf /preloaded.tar -C /extractDir: (4.476095654s)
	I1009 19:01:42.698038  296772 kic.go:203] duration metric: took 4.476246125s to extract preloaded images to volume ...
	W1009 19:01:42.698182  296772 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1009 19:01:42.698298  296772 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1009 19:01:42.758663  296772 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-999657 --name addons-999657 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-999657 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-999657 --network addons-999657 --ip 192.168.49.2 --volume addons-999657:/var --security-opt apparmor=unconfined --memory=4096mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92
	I1009 19:01:43.048107  296772 cli_runner.go:164] Run: docker container inspect addons-999657 --format={{.State.Running}}
	I1009 19:01:43.072696  296772 cli_runner.go:164] Run: docker container inspect addons-999657 --format={{.State.Status}}
	I1009 19:01:43.095687  296772 cli_runner.go:164] Run: docker exec addons-999657 stat /var/lib/dpkg/alternatives/iptables
	I1009 19:01:43.153649  296772 oci.go:144] the created container "addons-999657" has a running status.
	I1009 19:01:43.153678  296772 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21683-294150/.minikube/machines/addons-999657/id_rsa...
	I1009 19:01:43.873086  296772 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21683-294150/.minikube/machines/addons-999657/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1009 19:01:43.893017  296772 cli_runner.go:164] Run: docker container inspect addons-999657 --format={{.State.Status}}
	I1009 19:01:43.910063  296772 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1009 19:01:43.910084  296772 kic_runner.go:114] Args: [docker exec --privileged addons-999657 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1009 19:01:43.949748  296772 cli_runner.go:164] Run: docker container inspect addons-999657 --format={{.State.Status}}
	I1009 19:01:43.968096  296772 machine.go:93] provisionDockerMachine start ...
	I1009 19:01:43.968204  296772 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-999657
	I1009 19:01:43.986422  296772 main.go:141] libmachine: Using SSH client type: native
	I1009 19:01:43.986760  296772 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33139 <nil> <nil>}
	I1009 19:01:43.986776  296772 main.go:141] libmachine: About to run SSH command:
	hostname
	I1009 19:01:43.987437  296772 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:44722->127.0.0.1:33139: read: connection reset by peer
	I1009 19:01:47.132739  296772 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-999657
	
	I1009 19:01:47.132763  296772 ubuntu.go:182] provisioning hostname "addons-999657"
	I1009 19:01:47.132840  296772 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-999657
	I1009 19:01:47.152468  296772 main.go:141] libmachine: Using SSH client type: native
	I1009 19:01:47.152779  296772 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33139 <nil> <nil>}
	I1009 19:01:47.152795  296772 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-999657 && echo "addons-999657" | sudo tee /etc/hostname
	I1009 19:01:47.316441  296772 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-999657
	
	I1009 19:01:47.316531  296772 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-999657
	I1009 19:01:47.336445  296772 main.go:141] libmachine: Using SSH client type: native
	I1009 19:01:47.336760  296772 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33139 <nil> <nil>}
	I1009 19:01:47.336782  296772 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-999657' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-999657/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-999657' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1009 19:01:47.481529  296772 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1009 19:01:47.481557  296772 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21683-294150/.minikube CaCertPath:/home/jenkins/minikube-integration/21683-294150/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21683-294150/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21683-294150/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21683-294150/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21683-294150/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21683-294150/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21683-294150/.minikube}
	I1009 19:01:47.481576  296772 ubuntu.go:190] setting up certificates
	I1009 19:01:47.481593  296772 provision.go:84] configureAuth start
	I1009 19:01:47.481653  296772 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-999657
	I1009 19:01:47.505853  296772 provision.go:143] copyHostCerts
	I1009 19:01:47.505948  296772 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-294150/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21683-294150/.minikube/ca.pem (1078 bytes)
	I1009 19:01:47.506092  296772 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-294150/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21683-294150/.minikube/cert.pem (1123 bytes)
	I1009 19:01:47.506193  296772 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-294150/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21683-294150/.minikube/key.pem (1679 bytes)
	I1009 19:01:47.506269  296772 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21683-294150/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21683-294150/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21683-294150/.minikube/certs/ca-key.pem org=jenkins.addons-999657 san=[127.0.0.1 192.168.49.2 addons-999657 localhost minikube]
	I1009 19:01:47.983830  296772 provision.go:177] copyRemoteCerts
	I1009 19:01:47.983901  296772 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1009 19:01:47.983943  296772 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-999657
	I1009 19:01:48.002366  296772 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/21683-294150/.minikube/machines/addons-999657/id_rsa Username:docker}
	I1009 19:01:48.109203  296772 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-294150/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1009 19:01:48.128025  296772 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-294150/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1009 19:01:48.146806  296772 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-294150/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1009 19:01:48.164891  296772 provision.go:87] duration metric: took 683.272793ms to configureAuth
	I1009 19:01:48.164918  296772 ubuntu.go:206] setting minikube options for container-runtime
	I1009 19:01:48.165185  296772 config.go:182] Loaded profile config "addons-999657": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1009 19:01:48.165295  296772 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-999657
	I1009 19:01:48.182559  296772 main.go:141] libmachine: Using SSH client type: native
	I1009 19:01:48.182863  296772 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33139 <nil> <nil>}
	I1009 19:01:48.182882  296772 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1009 19:01:48.433417  296772 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1009 19:01:48.433497  296772 machine.go:96] duration metric: took 4.465372947s to provisionDockerMachine
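The provisioning step that just finished wrote /etc/sysconfig/crio.minikube inside the node and restarted CRI-O over SSH. A sketch for double-checking that drop-in from the host, assuming the addons-999657 container is still running:

    # the insecure-registry flag written for the service CIDR should be present
    docker exec addons-999657 cat /etc/sysconfig/crio.minikube
    # CRI-O should report active again after the restart above
    docker exec addons-999657 systemctl is-active crio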
	I1009 19:01:48.433527  296772 client.go:171] duration metric: took 13.545214098s to LocalClient.Create
	I1009 19:01:48.433590  296772 start.go:168] duration metric: took 13.54533532s to libmachine.API.Create "addons-999657"
	I1009 19:01:48.433626  296772 start.go:294] postStartSetup for "addons-999657" (driver="docker")
	I1009 19:01:48.433665  296772 start.go:323] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1009 19:01:48.433798  296772 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1009 19:01:48.433926  296772 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-999657
	I1009 19:01:48.451913  296772 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/21683-294150/.minikube/machines/addons-999657/id_rsa Username:docker}
	I1009 19:01:48.553255  296772 ssh_runner.go:195] Run: cat /etc/os-release
	I1009 19:01:48.556509  296772 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1009 19:01:48.556538  296772 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1009 19:01:48.556550  296772 filesync.go:126] Scanning /home/jenkins/minikube-integration/21683-294150/.minikube/addons for local assets ...
	I1009 19:01:48.556619  296772 filesync.go:126] Scanning /home/jenkins/minikube-integration/21683-294150/.minikube/files for local assets ...
	I1009 19:01:48.556649  296772 start.go:297] duration metric: took 122.990831ms for postStartSetup
	I1009 19:01:48.556958  296772 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-999657
	I1009 19:01:48.573496  296772 profile.go:143] Saving config to /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/addons-999657/config.json ...
	I1009 19:01:48.573804  296772 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1009 19:01:48.573857  296772 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-999657
	I1009 19:01:48.591007  296772 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/21683-294150/.minikube/machines/addons-999657/id_rsa Username:docker}
	I1009 19:01:48.690121  296772 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1009 19:01:48.695263  296772 start.go:129] duration metric: took 13.808859016s to createHost
	I1009 19:01:48.695312  296772 start.go:84] releasing machines lock for "addons-999657", held for 13.809032804s
	I1009 19:01:48.695423  296772 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-999657
	I1009 19:01:48.712595  296772 ssh_runner.go:195] Run: cat /version.json
	I1009 19:01:48.712658  296772 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-999657
	I1009 19:01:48.712922  296772 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1009 19:01:48.712986  296772 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-999657
	I1009 19:01:48.734738  296772 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/21683-294150/.minikube/machines/addons-999657/id_rsa Username:docker}
	I1009 19:01:48.743309  296772 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/21683-294150/.minikube/machines/addons-999657/id_rsa Username:docker}
	I1009 19:01:48.836802  296772 ssh_runner.go:195] Run: systemctl --version
	I1009 19:01:48.930801  296772 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1009 19:01:48.966474  296772 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1009 19:01:48.970908  296772 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1009 19:01:48.971040  296772 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1009 19:01:49.000452  296772 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
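The bridge and podman CNI configs are renamed with a .mk_disabled suffix so that only the CNI minikube installs (kindnet for this docker+crio combination) stays active. A one-line sketch of confirming what remains enabled, assuming the node container is up:

    # anything still ending in .conf/.conflist is what CRI-O will actually load
    docker exec addons-999657 ls -l /etc/cni/net.d/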
	I1009 19:01:49.000479  296772 start.go:496] detecting cgroup driver to use...
	I1009 19:01:49.000520  296772 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1009 19:01:49.000573  296772 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1009 19:01:49.017650  296772 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1009 19:01:49.030871  296772 docker.go:218] disabling cri-docker service (if available) ...
	I1009 19:01:49.030939  296772 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1009 19:01:49.051643  296772 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1009 19:01:49.070934  296772 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1009 19:01:49.189540  296772 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1009 19:01:49.312980  296772 docker.go:234] disabling docker service ...
	I1009 19:01:49.313091  296772 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1009 19:01:49.335452  296772 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1009 19:01:49.348961  296772 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1009 19:01:49.460352  296772 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1009 19:01:49.585135  296772 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1009 19:01:49.598787  296772 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1009 19:01:49.613903  296772 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1009 19:01:49.613991  296772 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:01:49.623874  296772 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1009 19:01:49.623968  296772 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:01:49.635177  296772 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:01:49.644806  296772 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:01:49.654839  296772 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1009 19:01:49.663016  296772 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:01:49.672400  296772 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:01:49.686417  296772 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:01:49.695597  296772 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1009 19:01:49.703582  296772 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1009 19:01:49.711318  296772 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 19:01:49.825534  296772 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1009 19:01:49.955214  296772 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1009 19:01:49.955336  296772 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1009 19:01:49.959908  296772 start.go:564] Will wait 60s for crictl version
	I1009 19:01:49.960004  296772 ssh_runner.go:195] Run: which crictl
	I1009 19:01:49.963836  296772 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1009 19:01:49.988322  296772 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
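CRI-O is back up with the settings applied by the sed edits a few lines above (pause image, cgroupfs cgroup manager, conmon cgroup, unprivileged-port sysctl). A sketch of inspecting the resulting keys, assuming the file path from this log:

    # show the keys the configuration step is expected to have produced
    docker exec addons-999657 grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' /etc/crio/crio.conf.d/02-crio.conf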
	I1009 19:01:49.988449  296772 ssh_runner.go:195] Run: crio --version
	I1009 19:01:50.016501  296772 ssh_runner.go:195] Run: crio --version
	I1009 19:01:50.056004  296772 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1009 19:01:50.060011  296772 cli_runner.go:164] Run: docker network inspect addons-999657 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1009 19:01:50.079399  296772 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1009 19:01:50.083525  296772 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1009 19:01:50.094449  296772 kubeadm.go:883] updating cluster {Name:addons-999657 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-999657 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNa
mes:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketV
MnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1009 19:01:50.094571  296772 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1009 19:01:50.094640  296772 ssh_runner.go:195] Run: sudo crictl images --output json
	I1009 19:01:50.135741  296772 crio.go:514] all images are preloaded for cri-o runtime.
	I1009 19:01:50.135767  296772 crio.go:433] Images already preloaded, skipping extraction
	I1009 19:01:50.135829  296772 ssh_runner.go:195] Run: sudo crictl images --output json
	I1009 19:01:50.162149  296772 crio.go:514] all images are preloaded for cri-o runtime.
	I1009 19:01:50.162175  296772 cache_images.go:85] Images are preloaded, skipping loading
	I1009 19:01:50.162184  296772 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.34.1 crio true true} ...
	I1009 19:01:50.162342  296772 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=addons-999657 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:addons-999657 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
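The unit fragment above is the kubelet systemd drop-in; a few lines below the log confirms it is copied to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes). A sketch of checking the rendered flags on the node, assuming the container name from this run:

    # the drop-in carrying the ExecStart override shown above
    docker exec addons-999657 cat /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
    # effective ExecStart after systemd merges the unit and the drop-in
    docker exec addons-999657 systemctl cat kubelet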
	I1009 19:01:50.162439  296772 ssh_runner.go:195] Run: crio config
	I1009 19:01:50.220557  296772 cni.go:84] Creating CNI manager for ""
	I1009 19:01:50.220582  296772 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1009 19:01:50.220607  296772 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1009 19:01:50.220665  296772 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-999657 NodeName:addons-999657 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kuberne
tes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1009 19:01:50.220871  296772 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-999657"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1009 19:01:50.220968  296772 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1009 19:01:50.229125  296772 binaries.go:44] Found k8s binaries, skipping transfer
	I1009 19:01:50.229216  296772 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1009 19:01:50.236889  296772 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1009 19:01:50.249608  296772 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1009 19:01:50.263391  296772 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2210 bytes)
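The kubeadm config generated above is now staged on the node as /var/tmp/minikube/kubeadm.yaml.new. The kubeadm invocation itself does not appear in this excerpt, so the following is only an illustrative sketch of validating a staged config with the pinned binary, not the command minikube actually ran:

    # dry-run the staged config without touching the cluster; preflight warnings are expected
    docker exec addons-999657 sudo /var/lib/minikube/binaries/v1.34.1/kubeadm init --config /var/tmp/minikube/kubeadm.yaml.new --dry-run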
	I1009 19:01:50.276374  296772 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1009 19:01:50.280033  296772 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1009 19:01:50.290043  296772 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 19:01:50.407012  296772 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1009 19:01:50.424063  296772 certs.go:69] Setting up /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/addons-999657 for IP: 192.168.49.2
	I1009 19:01:50.424131  296772 certs.go:195] generating shared ca certs ...
	I1009 19:01:50.424165  296772 certs.go:227] acquiring lock for ca certs: {Name:mk21fccf7de12baa51e226eec44546b42b579a70 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 19:01:50.424334  296772 certs.go:241] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/21683-294150/.minikube/ca.key
	I1009 19:01:50.607990  296772 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21683-294150/.minikube/ca.crt ...
	I1009 19:01:50.608032  296772 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-294150/.minikube/ca.crt: {Name:mk0316901a716eaa5700db6d41b8adda1dc81adc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 19:01:50.608286  296772 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21683-294150/.minikube/ca.key ...
	I1009 19:01:50.608303  296772 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-294150/.minikube/ca.key: {Name:mkccde951df0bb8152ae82f675fcd46af7288b9d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 19:01:50.608399  296772 certs.go:241] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21683-294150/.minikube/proxy-client-ca.key
	I1009 19:01:50.944235  296772 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21683-294150/.minikube/proxy-client-ca.crt ...
	I1009 19:01:50.944267  296772 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-294150/.minikube/proxy-client-ca.crt: {Name:mk044a571a6d3d56e00aa1ba715adfac50d1bbb0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 19:01:50.944453  296772 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21683-294150/.minikube/proxy-client-ca.key ...
	I1009 19:01:50.944467  296772 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-294150/.minikube/proxy-client-ca.key: {Name:mkff81370b2f76c9e643456d05c4c3484afe318e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 19:01:50.944549  296772 certs.go:257] generating profile certs ...
	I1009 19:01:50.944614  296772 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/addons-999657/client.key
	I1009 19:01:50.944632  296772 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/addons-999657/client.crt with IP's: []
	I1009 19:01:51.495445  296772 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/addons-999657/client.crt ...
	I1009 19:01:51.495480  296772 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/addons-999657/client.crt: {Name:mkb2b9db7cec29c19c97e0c0966f111d5bee6c8f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 19:01:51.495673  296772 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/addons-999657/client.key ...
	I1009 19:01:51.495686  296772 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/addons-999657/client.key: {Name:mk8b7329e68497b88fd53b32009d329d2b491dab Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 19:01:51.495771  296772 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/addons-999657/apiserver.key.efddb3c5
	I1009 19:01:51.495792  296772 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/addons-999657/apiserver.crt.efddb3c5 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I1009 19:01:52.514573  296772 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/addons-999657/apiserver.crt.efddb3c5 ...
	I1009 19:01:52.514607  296772 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/addons-999657/apiserver.crt.efddb3c5: {Name:mk623515e7a0f073c54954239de7cff11f83ba90 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 19:01:52.514814  296772 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/addons-999657/apiserver.key.efddb3c5 ...
	I1009 19:01:52.514841  296772 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/addons-999657/apiserver.key.efddb3c5: {Name:mkd4657a15eb79b60d4dbac583d3114e18057cc4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 19:01:52.514936  296772 certs.go:382] copying /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/addons-999657/apiserver.crt.efddb3c5 -> /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/addons-999657/apiserver.crt
	I1009 19:01:52.515024  296772 certs.go:386] copying /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/addons-999657/apiserver.key.efddb3c5 -> /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/addons-999657/apiserver.key
	I1009 19:01:52.515080  296772 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/addons-999657/proxy-client.key
	I1009 19:01:52.515102  296772 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/addons-999657/proxy-client.crt with IP's: []
	I1009 19:01:52.911939  296772 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/addons-999657/proxy-client.crt ...
	I1009 19:01:52.911971  296772 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/addons-999657/proxy-client.crt: {Name:mk84307c805f583c0c3d20a25774dc0045ed0754 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 19:01:52.912146  296772 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/addons-999657/proxy-client.key ...
	I1009 19:01:52.912160  296772 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/addons-999657/proxy-client.key: {Name:mk1689535ab330d4a3aed12d8422f75e38ce76ee Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 19:01:52.912363  296772 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-294150/.minikube/certs/ca-key.pem (1679 bytes)
	I1009 19:01:52.912405  296772 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-294150/.minikube/certs/ca.pem (1078 bytes)
	I1009 19:01:52.912436  296772 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-294150/.minikube/certs/cert.pem (1123 bytes)
	I1009 19:01:52.912464  296772 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-294150/.minikube/certs/key.pem (1679 bytes)
	I1009 19:01:52.913035  296772 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-294150/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1009 19:01:52.932109  296772 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-294150/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1009 19:01:52.950787  296772 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-294150/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1009 19:01:52.969762  296772 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-294150/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1009 19:01:52.988228  296772 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/addons-999657/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1009 19:01:53.006024  296772 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/addons-999657/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1009 19:01:53.023747  296772 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/addons-999657/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1009 19:01:53.044992  296772 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/addons-999657/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1009 19:01:53.064835  296772 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-294150/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1009 19:01:53.083531  296772 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1009 19:01:53.096404  296772 ssh_runner.go:195] Run: openssl version
	I1009 19:01:53.103129  296772 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1009 19:01:53.111827  296772 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1009 19:01:53.115783  296772 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  9 19:01 /usr/share/ca-certificates/minikubeCA.pem
	I1009 19:01:53.115853  296772 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1009 19:01:53.159088  296772 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
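	The openssl/ln steps above follow the standard c_rehash convention: the CA certificate's subject hash names a symlink under /etc/ssl/certs so OpenSSL-based clients can locate the CA. A short sketch of the equivalent manual commands (b5213941 is the hash observed in this run):
	  HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)   # e.g. b5213941
	  sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${HASH}.0"              # <hash>.0 symlink for OpenSSL lookup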
	I1009 19:01:53.167702  296772 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1009 19:01:53.171297  296772 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1009 19:01:53.171349  296772 kubeadm.go:400] StartCluster: {Name:addons-999657 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-999657 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1009 19:01:53.171428  296772 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1009 19:01:53.171491  296772 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1009 19:01:53.200401  296772 cri.go:89] found id: ""
	I1009 19:01:53.200474  296772 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1009 19:01:53.208234  296772 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1009 19:01:53.216038  296772 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1009 19:01:53.216144  296772 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1009 19:01:53.224015  296772 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1009 19:01:53.224046  296772 kubeadm.go:157] found existing configuration files:
	
	I1009 19:01:53.224099  296772 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1009 19:01:53.231763  296772 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1009 19:01:53.231831  296772 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1009 19:01:53.239384  296772 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1009 19:01:53.247204  296772 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1009 19:01:53.247362  296772 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1009 19:01:53.255034  296772 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1009 19:01:53.262632  296772 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1009 19:01:53.262739  296772 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1009 19:01:53.270060  296772 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1009 19:01:53.277688  296772 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1009 19:01:53.277802  296772 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1009 19:01:53.285265  296772 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1009 19:01:53.323522  296772 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1009 19:01:53.323744  296772 kubeadm.go:318] [preflight] Running pre-flight checks
	I1009 19:01:53.346274  296772 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1009 19:01:53.346352  296772 kubeadm.go:318] KERNEL_VERSION: 5.15.0-1084-aws
	I1009 19:01:53.346397  296772 kubeadm.go:318] OS: Linux
	I1009 19:01:53.346449  296772 kubeadm.go:318] CGROUPS_CPU: enabled
	I1009 19:01:53.346504  296772 kubeadm.go:318] CGROUPS_CPUACCT: enabled
	I1009 19:01:53.346557  296772 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1009 19:01:53.346612  296772 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1009 19:01:53.346684  296772 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1009 19:01:53.346743  296772 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1009 19:01:53.346794  296772 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1009 19:01:53.346847  296772 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1009 19:01:53.346900  296772 kubeadm.go:318] CGROUPS_BLKIO: enabled
	I1009 19:01:53.429314  296772 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1009 19:01:53.429456  296772 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1009 19:01:53.429556  296772 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1009 19:01:53.437921  296772 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1009 19:01:53.442201  296772 out.go:252]   - Generating certificates and keys ...
	I1009 19:01:53.442309  296772 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1009 19:01:53.442449  296772 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1009 19:01:53.706989  296772 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1009 19:01:53.930872  296772 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1009 19:01:54.281879  296772 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1009 19:01:54.540236  296772 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1009 19:01:55.186843  296772 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1009 19:01:55.186984  296772 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [addons-999657 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1009 19:01:57.182473  296772 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1009 19:01:57.182778  296772 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [addons-999657 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1009 19:01:57.962862  296772 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1009 19:01:58.518577  296772 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1009 19:01:59.250353  296772 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1009 19:01:59.250651  296772 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1009 19:02:00.094661  296772 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1009 19:02:02.171797  296772 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1009 19:02:03.077326  296772 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1009 19:02:03.460547  296772 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1009 19:02:03.689089  296772 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1009 19:02:03.690151  296772 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1009 19:02:03.694233  296772 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1009 19:02:03.697880  296772 out.go:252]   - Booting up control plane ...
	I1009 19:02:03.697998  296772 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1009 19:02:03.698087  296772 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1009 19:02:03.699210  296772 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1009 19:02:03.715880  296772 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1009 19:02:03.715997  296772 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1009 19:02:03.723989  296772 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1009 19:02:03.724257  296772 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1009 19:02:03.724454  296772 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1009 19:02:03.855157  296772 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1009 19:02:03.855287  296772 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1009 19:02:04.365525  296772 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 508.078807ms
	I1009 19:02:04.366829  296772 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1009 19:02:04.367057  296772 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1009 19:02:04.367165  296772 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1009 19:02:04.367258  296772 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1009 19:02:08.086551  296772 kubeadm.go:318] [control-plane-check] kube-controller-manager is healthy after 3.718900243s
	I1009 19:02:10.876673  296772 kubeadm.go:318] [control-plane-check] kube-scheduler is healthy after 6.509522896s
	I1009 19:02:11.370887  296772 kubeadm.go:318] [control-plane-check] kube-apiserver is healthy after 7.003232749s
	I1009 19:02:11.398297  296772 kubeadm.go:318] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1009 19:02:11.410902  296772 kubeadm.go:318] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1009 19:02:11.427132  296772 kubeadm.go:318] [upload-certs] Skipping phase. Please see --upload-certs
	I1009 19:02:11.427358  296772 kubeadm.go:318] [mark-control-plane] Marking the node addons-999657 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1009 19:02:11.439327  296772 kubeadm.go:318] [bootstrap-token] Using token: diu2ln.o4wtypfu62jwn63h
	I1009 19:02:11.442470  296772 out.go:252]   - Configuring RBAC rules ...
	I1009 19:02:11.442602  296772 kubeadm.go:318] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1009 19:02:11.448346  296772 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1009 19:02:11.459650  296772 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1009 19:02:11.463889  296772 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1009 19:02:11.468295  296772 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1009 19:02:11.472697  296772 kubeadm.go:318] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1009 19:02:11.778331  296772 kubeadm.go:318] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1009 19:02:12.208024  296772 kubeadm.go:318] [addons] Applied essential addon: CoreDNS
	I1009 19:02:12.778626  296772 kubeadm.go:318] [addons] Applied essential addon: kube-proxy
	I1009 19:02:12.778650  296772 kubeadm.go:318] 
	I1009 19:02:12.778715  296772 kubeadm.go:318] Your Kubernetes control-plane has initialized successfully!
	I1009 19:02:12.778725  296772 kubeadm.go:318] 
	I1009 19:02:12.778806  296772 kubeadm.go:318] To start using your cluster, you need to run the following as a regular user:
	I1009 19:02:12.778816  296772 kubeadm.go:318] 
	I1009 19:02:12.778843  296772 kubeadm.go:318]   mkdir -p $HOME/.kube
	I1009 19:02:12.778908  296772 kubeadm.go:318]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1009 19:02:12.778965  296772 kubeadm.go:318]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1009 19:02:12.778974  296772 kubeadm.go:318] 
	I1009 19:02:12.779031  296772 kubeadm.go:318] Alternatively, if you are the root user, you can run:
	I1009 19:02:12.779039  296772 kubeadm.go:318] 
	I1009 19:02:12.779088  296772 kubeadm.go:318]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1009 19:02:12.779096  296772 kubeadm.go:318] 
	I1009 19:02:12.779150  296772 kubeadm.go:318] You should now deploy a pod network to the cluster.
	I1009 19:02:12.779231  296772 kubeadm.go:318] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1009 19:02:12.779310  296772 kubeadm.go:318]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1009 19:02:12.779320  296772 kubeadm.go:318] 
	I1009 19:02:12.779407  296772 kubeadm.go:318] You can now join any number of control-plane nodes by copying certificate authorities
	I1009 19:02:12.779490  296772 kubeadm.go:318] and service account keys on each node and then running the following as root:
	I1009 19:02:12.779510  296772 kubeadm.go:318] 
	I1009 19:02:12.779598  296772 kubeadm.go:318]   kubeadm join control-plane.minikube.internal:8443 --token diu2ln.o4wtypfu62jwn63h \
	I1009 19:02:12.779709  296772 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:e766d16640f098061f552dd476e80ebd3809bd57b4957045222f32c55d34903e \
	I1009 19:02:12.779734  296772 kubeadm.go:318] 	--control-plane 
	I1009 19:02:12.779739  296772 kubeadm.go:318] 
	I1009 19:02:12.779830  296772 kubeadm.go:318] Then you can join any number of worker nodes by running the following on each as root:
	I1009 19:02:12.779840  296772 kubeadm.go:318] 
	I1009 19:02:12.779925  296772 kubeadm.go:318] kubeadm join control-plane.minikube.internal:8443 --token diu2ln.o4wtypfu62jwn63h \
	I1009 19:02:12.780035  296772 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:e766d16640f098061f552dd476e80ebd3809bd57b4957045222f32c55d34903e 
	I1009 19:02:12.784264  296772 kubeadm.go:318] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1009 19:02:12.784491  296772 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1009 19:02:12.784597  296772 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1009 19:02:12.784617  296772 cni.go:84] Creating CNI manager for ""
	I1009 19:02:12.784625  296772 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1009 19:02:12.787862  296772 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1009 19:02:12.790845  296772 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1009 19:02:12.795311  296772 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1009 19:02:12.795333  296772 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1009 19:02:12.808533  296772 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1009 19:02:13.108853  296772 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1009 19:02:13.108975  296772 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1009 19:02:13.109062  296772 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-999657 minikube.k8s.io/updated_at=2025_10_09T19_02_13_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=67931d2cc0a2e29153be17ee4f2d502b8a45c9cb minikube.k8s.io/name=addons-999657 minikube.k8s.io/primary=true
	I1009 19:02:13.131923  296772 ops.go:34] apiserver oom_adj: -16
	I1009 19:02:13.255007  296772 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1009 19:02:13.755602  296772 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1009 19:02:14.255388  296772 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1009 19:02:14.755890  296772 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1009 19:02:15.256089  296772 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1009 19:02:15.755104  296772 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1009 19:02:16.255619  296772 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1009 19:02:16.755695  296772 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1009 19:02:16.871691  296772 kubeadm.go:1113] duration metric: took 3.762774782s to wait for elevateKubeSystemPrivileges
	I1009 19:02:16.871721  296772 kubeadm.go:402] duration metric: took 23.700375217s to StartCluster
	I1009 19:02:16.871739  296772 settings.go:142] acquiring lock: {Name:mk20228ebaa2294ae35726600a0d8058088b24a4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 19:02:16.871857  296772 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21683-294150/kubeconfig
	I1009 19:02:16.872247  296772 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-294150/kubeconfig: {Name:mke4661344aa77e10bf6690825e8d3a29b29b411 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 19:02:16.872438  296772 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1009 19:02:16.872616  296772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1009 19:02:16.872880  296772 config.go:182] Loaded profile config "addons-999657": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1009 19:02:16.872913  296772 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:true storage-provisioner:true storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I1009 19:02:16.872983  296772 addons.go:69] Setting yakd=true in profile "addons-999657"
	I1009 19:02:16.872997  296772 addons.go:238] Setting addon yakd=true in "addons-999657"
	I1009 19:02:16.873019  296772 host.go:66] Checking if "addons-999657" exists ...
	I1009 19:02:16.873163  296772 addons.go:69] Setting inspektor-gadget=true in profile "addons-999657"
	I1009 19:02:16.873185  296772 addons.go:238] Setting addon inspektor-gadget=true in "addons-999657"
	I1009 19:02:16.873216  296772 host.go:66] Checking if "addons-999657" exists ...
	I1009 19:02:16.873541  296772 cli_runner.go:164] Run: docker container inspect addons-999657 --format={{.State.Status}}
	I1009 19:02:16.873662  296772 cli_runner.go:164] Run: docker container inspect addons-999657 --format={{.State.Status}}
	I1009 19:02:16.873932  296772 addons.go:69] Setting metrics-server=true in profile "addons-999657"
	I1009 19:02:16.873953  296772 addons.go:238] Setting addon metrics-server=true in "addons-999657"
	I1009 19:02:16.873975  296772 host.go:66] Checking if "addons-999657" exists ...
	I1009 19:02:16.874390  296772 cli_runner.go:164] Run: docker container inspect addons-999657 --format={{.State.Status}}
	I1009 19:02:16.876497  296772 addons.go:69] Setting amd-gpu-device-plugin=true in profile "addons-999657"
	I1009 19:02:16.876531  296772 addons.go:238] Setting addon amd-gpu-device-plugin=true in "addons-999657"
	I1009 19:02:16.876566  296772 host.go:66] Checking if "addons-999657" exists ...
	I1009 19:02:16.877026  296772 cli_runner.go:164] Run: docker container inspect addons-999657 --format={{.State.Status}}
	I1009 19:02:16.877398  296772 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-999657"
	I1009 19:02:16.877463  296772 addons.go:238] Setting addon nvidia-device-plugin=true in "addons-999657"
	I1009 19:02:16.877615  296772 host.go:66] Checking if "addons-999657" exists ...
	I1009 19:02:16.879241  296772 cli_runner.go:164] Run: docker container inspect addons-999657 --format={{.State.Status}}
	I1009 19:02:16.882494  296772 addons.go:69] Setting cloud-spanner=true in profile "addons-999657"
	I1009 19:02:16.882537  296772 addons.go:238] Setting addon cloud-spanner=true in "addons-999657"
	I1009 19:02:16.882572  296772 host.go:66] Checking if "addons-999657" exists ...
	I1009 19:02:16.883108  296772 cli_runner.go:164] Run: docker container inspect addons-999657 --format={{.State.Status}}
	I1009 19:02:16.884515  296772 addons.go:69] Setting registry=true in profile "addons-999657"
	I1009 19:02:16.884544  296772 addons.go:238] Setting addon registry=true in "addons-999657"
	I1009 19:02:16.884581  296772 host.go:66] Checking if "addons-999657" exists ...
	I1009 19:02:16.885060  296772 cli_runner.go:164] Run: docker container inspect addons-999657 --format={{.State.Status}}
	I1009 19:02:16.889480  296772 addons.go:69] Setting registry-creds=true in profile "addons-999657"
	I1009 19:02:16.889524  296772 addons.go:238] Setting addon registry-creds=true in "addons-999657"
	I1009 19:02:16.889560  296772 host.go:66] Checking if "addons-999657" exists ...
	I1009 19:02:16.890052  296772 cli_runner.go:164] Run: docker container inspect addons-999657 --format={{.State.Status}}
	I1009 19:02:16.892322  296772 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-999657"
	I1009 19:02:16.892395  296772 addons.go:238] Setting addon csi-hostpath-driver=true in "addons-999657"
	I1009 19:02:16.892427  296772 host.go:66] Checking if "addons-999657" exists ...
	I1009 19:02:16.892993  296772 cli_runner.go:164] Run: docker container inspect addons-999657 --format={{.State.Status}}
	I1009 19:02:16.898352  296772 addons.go:69] Setting storage-provisioner=true in profile "addons-999657"
	I1009 19:02:16.898398  296772 addons.go:238] Setting addon storage-provisioner=true in "addons-999657"
	I1009 19:02:16.898433  296772 host.go:66] Checking if "addons-999657" exists ...
	I1009 19:02:16.898891  296772 cli_runner.go:164] Run: docker container inspect addons-999657 --format={{.State.Status}}
	I1009 19:02:16.905254  296772 addons.go:69] Setting default-storageclass=true in profile "addons-999657"
	I1009 19:02:16.905616  296772 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-999657"
	I1009 19:02:16.905984  296772 cli_runner.go:164] Run: docker container inspect addons-999657 --format={{.State.Status}}
	I1009 19:02:16.920984  296772 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-999657"
	I1009 19:02:16.921020  296772 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-999657"
	I1009 19:02:16.921528  296772 cli_runner.go:164] Run: docker container inspect addons-999657 --format={{.State.Status}}
	I1009 19:02:16.922299  296772 addons.go:69] Setting gcp-auth=true in profile "addons-999657"
	I1009 19:02:16.922332  296772 mustload.go:65] Loading cluster: addons-999657
	I1009 19:02:16.922546  296772 config.go:182] Loaded profile config "addons-999657": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1009 19:02:16.922808  296772 cli_runner.go:164] Run: docker container inspect addons-999657 --format={{.State.Status}}
	I1009 19:02:16.935250  296772 addons.go:69] Setting volcano=true in profile "addons-999657"
	I1009 19:02:16.935284  296772 addons.go:238] Setting addon volcano=true in "addons-999657"
	I1009 19:02:16.935322  296772 host.go:66] Checking if "addons-999657" exists ...
	I1009 19:02:16.935792  296772 cli_runner.go:164] Run: docker container inspect addons-999657 --format={{.State.Status}}
	I1009 19:02:16.943159  296772 addons.go:69] Setting ingress=true in profile "addons-999657"
	I1009 19:02:16.943196  296772 addons.go:238] Setting addon ingress=true in "addons-999657"
	I1009 19:02:16.943239  296772 host.go:66] Checking if "addons-999657" exists ...
	I1009 19:02:16.943729  296772 cli_runner.go:164] Run: docker container inspect addons-999657 --format={{.State.Status}}
	I1009 19:02:16.954476  296772 addons.go:69] Setting volumesnapshots=true in profile "addons-999657"
	I1009 19:02:16.954518  296772 addons.go:238] Setting addon volumesnapshots=true in "addons-999657"
	I1009 19:02:16.954553  296772 host.go:66] Checking if "addons-999657" exists ...
	I1009 19:02:16.955045  296772 cli_runner.go:164] Run: docker container inspect addons-999657 --format={{.State.Status}}
	I1009 19:02:16.955185  296772 addons.go:69] Setting ingress-dns=true in profile "addons-999657"
	I1009 19:02:16.955198  296772 addons.go:238] Setting addon ingress-dns=true in "addons-999657"
	I1009 19:02:16.955224  296772 host.go:66] Checking if "addons-999657" exists ...
	I1009 19:02:16.955611  296772 cli_runner.go:164] Run: docker container inspect addons-999657 --format={{.State.Status}}
	I1009 19:02:16.969704  296772 out.go:179] * Verifying Kubernetes components...
	I1009 19:02:17.025539  296772 out.go:179]   - Using image docker.io/marcnuri/yakd:0.0.5
	I1009 19:02:17.130068  296772 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 19:02:17.142363  296772 out.go:179]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.8.0
	I1009 19:02:17.143917  296772 out.go:179]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.45.0
	I1009 19:02:17.146918  296772 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1009 19:02:17.147023  296772 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1009 19:02:17.147161  296772 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-999657
	I1009 19:02:17.160426  296772 out.go:179]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I1009 19:02:17.163360  296772 addons.go:435] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1009 19:02:17.163390  296772 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I1009 19:02:17.163455  296772 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-999657
	I1009 19:02:17.182109  296772 addons.go:238] Setting addon storage-provisioner-rancher=true in "addons-999657"
	I1009 19:02:17.182152  296772 host.go:66] Checking if "addons-999657" exists ...
	I1009 19:02:17.182744  296772 cli_runner.go:164] Run: docker container inspect addons-999657 --format={{.State.Status}}
	I1009 19:02:17.146942  296772 addons.go:435] installing /etc/kubernetes/addons/ig-crd.yaml
	I1009 19:02:17.198031  296772 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (14 bytes)
	I1009 19:02:17.198209  296772 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-999657
	I1009 19:02:17.211571  296772 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1009 19:02:17.146958  296772 addons.go:435] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1009 19:02:17.212161  296772 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I1009 19:02:17.212267  296772 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-999657
	I1009 19:02:17.197523  296772 addons.go:238] Setting addon default-storageclass=true in "addons-999657"
	I1009 19:02:17.214159  296772 host.go:66] Checking if "addons-999657" exists ...
	I1009 19:02:17.197544  296772 out.go:179]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.9
	I1009 19:02:17.197550  296772 out.go:179]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1009 19:02:17.197554  296772 out.go:179]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.17.4
	W1009 19:02:17.197893  296772 out.go:285] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I1009 19:02:17.215043  296772 host.go:66] Checking if "addons-999657" exists ...
	I1009 19:02:17.238923  296772 cli_runner.go:164] Run: docker container inspect addons-999657 --format={{.State.Status}}
	I1009 19:02:17.240602  296772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1009 19:02:17.265175  296772 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1009 19:02:17.269434  296772 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1009 19:02:17.275715  296772 out.go:179]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1009 19:02:17.276082  296772 out.go:179]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.42
	I1009 19:02:17.298874  296772 out.go:179]   - Using image registry.k8s.io/ingress-nginx/controller:v1.13.2
	I1009 19:02:17.302980  296772 addons.go:435] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1009 19:02:17.303048  296772 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1009 19:02:17.303152  296772 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-999657
	I1009 19:02:17.309192  296772 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.2
	I1009 19:02:17.313238  296772 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.2
	I1009 19:02:17.318938  296772 addons.go:435] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1009 19:02:17.319016  296772 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I1009 19:02:17.319122  296772 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-999657
	I1009 19:02:17.319308  296772 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/21683-294150/.minikube/machines/addons-999657/id_rsa Username:docker}
	I1009 19:02:17.320120  296772 out.go:179]   - Using image docker.io/upmcenterprises/registry-creds:1.10
	I1009 19:02:17.320343  296772 out.go:179]   - Using image docker.io/kicbase/minikube-ingress-dns:0.0.4
	I1009 19:02:17.320534  296772 addons.go:435] installing /etc/kubernetes/addons/deployment.yaml
	I1009 19:02:17.320547  296772 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1009 19:02:17.320605  296772 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-999657
	I1009 19:02:17.348467  296772 out.go:179]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1009 19:02:17.351702  296772 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1009 19:02:17.354629  296772 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1009 19:02:17.358905  296772 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1009 19:02:17.341574  296772 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1009 19:02:17.363352  296772 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1009 19:02:17.341589  296772 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1009 19:02:17.341633  296772 addons.go:435] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1009 19:02:17.369546  296772 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2889 bytes)
	I1009 19:02:17.341653  296772 addons.go:435] installing /etc/kubernetes/addons/registry-creds-rc.yaml
	I1009 19:02:17.369571  296772 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-creds-rc.yaml (3306 bytes)
	I1009 19:02:17.369652  296772 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-999657
	I1009 19:02:17.372383  296772 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/21683-294150/.minikube/machines/addons-999657/id_rsa Username:docker}
	I1009 19:02:17.375298  296772 addons.go:435] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1009 19:02:17.375372  296772 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1009 19:02:17.375472  296772 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-999657
	I1009 19:02:17.381310  296772 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-999657
	I1009 19:02:17.363303  296772 out.go:179]   - Using image docker.io/registry:3.0.0
	I1009 19:02:17.381772  296772 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-999657
	I1009 19:02:17.394795  296772 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1009 19:02:17.394817  296772 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1009 19:02:17.394881  296772 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-999657
	I1009 19:02:17.412283  296772 addons.go:435] installing /etc/kubernetes/addons/registry-rc.yaml
	I1009 19:02:17.412303  296772 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I1009 19:02:17.412383  296772 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-999657
	I1009 19:02:17.416946  296772 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/21683-294150/.minikube/machines/addons-999657/id_rsa Username:docker}
	I1009 19:02:17.421604  296772 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/21683-294150/.minikube/machines/addons-999657/id_rsa Username:docker}
	I1009 19:02:17.448568  296772 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1009 19:02:17.448590  296772 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1009 19:02:17.448656  296772 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-999657
	I1009 19:02:17.449226  296772 out.go:179]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1009 19:02:17.455501  296772 out.go:179]   - Using image docker.io/busybox:stable
	I1009 19:02:17.461252  296772 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1009 19:02:17.461284  296772 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1009 19:02:17.461353  296772 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-999657
	I1009 19:02:17.537349  296772 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/21683-294150/.minikube/machines/addons-999657/id_rsa Username:docker}
	I1009 19:02:17.561655  296772 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/21683-294150/.minikube/machines/addons-999657/id_rsa Username:docker}
	I1009 19:02:17.569258  296772 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/21683-294150/.minikube/machines/addons-999657/id_rsa Username:docker}
	I1009 19:02:17.583393  296772 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/21683-294150/.minikube/machines/addons-999657/id_rsa Username:docker}
	I1009 19:02:17.598493  296772 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/21683-294150/.minikube/machines/addons-999657/id_rsa Username:docker}
	I1009 19:02:17.614264  296772 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/21683-294150/.minikube/machines/addons-999657/id_rsa Username:docker}
	I1009 19:02:17.617004  296772 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1009 19:02:17.616904  296772 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/21683-294150/.minikube/machines/addons-999657/id_rsa Username:docker}
	I1009 19:02:17.620359  296772 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/21683-294150/.minikube/machines/addons-999657/id_rsa Username:docker}
	I1009 19:02:17.630415  296772 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/21683-294150/.minikube/machines/addons-999657/id_rsa Username:docker}
	I1009 19:02:17.638218  296772 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/21683-294150/.minikube/machines/addons-999657/id_rsa Username:docker}
	I1009 19:02:17.646030  296772 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/21683-294150/.minikube/machines/addons-999657/id_rsa Username:docker}
	W1009 19:02:17.647815  296772 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1009 19:02:17.647846  296772 retry.go:31] will retry after 191.166269ms: ssh: handshake failed: EOF
	W1009 19:02:17.648056  296772 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1009 19:02:17.648069  296772 retry.go:31] will retry after 185.806398ms: ssh: handshake failed: EOF
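	Each "new ssh client" above dials 127.0.0.1:33139, the host port Docker mapped to the container's port 22; minikube resolves it with the Go-template inspect call that recurs throughout this log. A sketch of that lookup for this run's container:
	  docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' addons-999657
	  # prints the mapped host port (33139 in this run), which is then dialed as 127.0.0.1:33139 over SSH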
	I1009 19:02:18.292967  296772 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1009 19:02:18.363586  296772 addons.go:435] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1009 19:02:18.363653  296772 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1009 19:02:18.390691  296772 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1009 19:02:18.390767  296772 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1009 19:02:18.403835  296772 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml
	I1009 19:02:18.417256  296772 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1009 19:02:18.451364  296772 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1009 19:02:18.453682  296772 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1009 19:02:18.463989  296772 addons.go:435] installing /etc/kubernetes/addons/registry-svc.yaml
	I1009 19:02:18.464063  296772 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1009 19:02:18.482145  296772 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1009 19:02:18.482219  296772 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1009 19:02:18.511906  296772 addons.go:435] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1009 19:02:18.511983  296772 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I1009 19:02:18.520580  296772 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1009 19:02:18.530824  296772 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1009 19:02:18.539830  296772 addons.go:435] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1009 19:02:18.539906  296772 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1009 19:02:18.543672  296772 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1009 19:02:18.565433  296772 addons.go:435] installing /etc/kubernetes/addons/ig-deployment.yaml
	I1009 19:02:18.565510  296772 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (15034 bytes)
	I1009 19:02:18.591582  296772 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1009 19:02:18.591610  296772 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1009 19:02:18.602946  296772 addons.go:435] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1009 19:02:18.602972  296772 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1009 19:02:18.664415  296772 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1009 19:02:18.664442  296772 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1009 19:02:18.691164  296772 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1009 19:02:18.736210  296772 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1009 19:02:18.745350  296772 addons.go:435] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1009 19:02:18.745382  296772 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1009 19:02:18.755057  296772 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1009 19:02:18.755084  296772 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1009 19:02:18.787758  296772 addons.go:435] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1009 19:02:18.787798  296772 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1009 19:02:18.831733  296772 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1009 19:02:18.831779  296772 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1009 19:02:18.833040  296772 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1009 19:02:18.959342  296772 addons.go:435] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1009 19:02:18.959365  296772 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1009 19:02:18.966191  296772 addons.go:435] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1009 19:02:18.966213  296772 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1009 19:02:18.969509  296772 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1009 19:02:18.971390  296772 addons.go:435] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1009 19:02:18.971406  296772 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1009 19:02:19.152276  296772 addons.go:435] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1009 19:02:19.152349  296772 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1009 19:02:19.178753  296772 addons.go:435] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1009 19:02:19.178822  296772 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I1009 19:02:19.249679  296772 addons.go:435] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1009 19:02:19.249749  296772 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1009 19:02:19.299279  296772 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1009 19:02:19.299355  296772 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1009 19:02:19.318885  296772 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1009 19:02:19.318963  296772 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1009 19:02:19.370928  296772 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1009 19:02:19.376097  296772 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1009 19:02:19.376119  296772 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1009 19:02:19.418347  296772 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.177674775s)
	I1009 19:02:19.418377  296772 start.go:977] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
	I1009 19:02:19.418443  296772 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (1.801419696s)
	I1009 19:02:19.419208  296772 node_ready.go:35] waiting up to 6m0s for node "addons-999657" to be "Ready" ...
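Note: the node_ready poll that produces the repeated "Ready":"False" warnings below can be approximated from the command line with a single blocking call; a minimal sketch (node name and the 6m0s timeout are taken from the log line above; this is not the test harness's actual polling mechanism):

	kubectl wait --for=condition=Ready node/addons-999657 --timeout=6m0s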
	I1009 19:02:19.524864  296772 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1009 19:02:19.628438  296772 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1009 19:02:19.628463  296772 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1009 19:02:19.804109  296772 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1009 19:02:19.804134  296772 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1009 19:02:19.916858  296772 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1009 19:02:19.924150  296772 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-999657" context rescaled to 1 replicas
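Note: the rescale above can be reproduced manually with a plain scale call; a minimal sketch assuming kubectl is pointed at the same cluster (this is not the code path kapi.go uses):

	kubectl -n kube-system scale deployment coredns --replicas=1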
	W1009 19:02:21.462792  296772 node_ready.go:57] node "addons-999657" has "Ready":"False" status (will retry)
	I1009 19:02:23.221940  296772 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (4.92889567s)
	I1009 19:02:23.221975  296772 addons.go:479] Verifying addon ingress=true in "addons-999657"
	I1009 19:02:23.222060  296772 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml: (4.818139453s)
	I1009 19:02:23.222148  296772 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (4.804817631s)
	I1009 19:02:23.222194  296772 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (4.770760015s)
	I1009 19:02:23.222260  296772 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (4.768506929s)
	I1009 19:02:23.222507  296772 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (4.701861401s)
	I1009 19:02:23.222564  296772 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (4.691661522s)
	I1009 19:02:23.222596  296772 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (4.6788585s)
	I1009 19:02:23.222637  296772 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (4.531449671s)
	I1009 19:02:23.222937  296772 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (4.486687201s)
	W1009 19:02:23.222966  296772 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget created
	serviceaccount/gadget created
	configmap/gadget created
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role created
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding created
	role.rbac.authorization.k8s.io/gadget-role created
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding created
	daemonset.apps/gadget created
	
	stderr:
	Warning: spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/gadget]: deprecated since v1.30; use the "appArmorProfile" field instead
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:02:23.222982  296772 retry.go:31] will retry after 326.662579ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget created
	serviceaccount/gadget created
	configmap/gadget created
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role created
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding created
	role.rbac.authorization.k8s.io/gadget-role created
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding created
	daemonset.apps/gadget created
	
	stderr:
	Warning: spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/gadget]: deprecated since v1.30; use the "appArmorProfile" field instead
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
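Note: the validation error above means at least one document in ig-crd.yaml carries no apiVersion/kind fields (an empty document left by a stray "---" separator produces the same message). A minimal sketch of how the check could be reproduced and, if desired, bypassed, assuming the same manifest path; the dry run should surface the same class of error without changing the cluster:

	# client-side dry run: reproduces the validation failure without applying anything
	sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --dry-run=client -f /etc/kubernetes/addons/ig-crd.yaml
	# escape hatch suggested by the error text itself (skips schema validation)
	sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --validate=false -f /etc/kubernetes/addons/ig-crd.yaml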
	I1009 19:02:23.223014  296772 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (4.389949918s)
	I1009 19:02:23.223024  296772 addons.go:479] Verifying addon registry=true in "addons-999657"
	I1009 19:02:23.223479  296772 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (4.253938567s)
	I1009 19:02:23.223497  296772 addons.go:479] Verifying addon metrics-server=true in "addons-999657"
	I1009 19:02:23.223535  296772 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (3.852578615s)
	I1009 19:02:23.223658  296772 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (3.698765181s)
	W1009 19:02:23.223678  296772 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1009 19:02:23.223690  296772 retry.go:31] will retry after 369.744237ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
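Note: the failure above is an ordering issue rather than a content issue: csi-hostpath-snapshotclass.yaml declares a VolumeSnapshotClass in the same apply batch that creates the snapshot CRDs, so the API server has no mapping for that kind yet; the later retry with --force succeeds once the CRDs are established. A sketch of the split-and-wait pattern (file and CRD names are taken from the log, the 60s timeout is an assumption, and this is not what addons.go itself does):

	kubectl apply -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml \
	  -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml \
	  -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	kubectl wait --for=condition=established --timeout=60s crd/volumesnapshotclasses.snapshot.storage.k8s.io
	kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml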
	I1009 19:02:23.225836  296772 out.go:179] * Verifying ingress addon...
	I1009 19:02:23.227957  296772 out.go:179] * Verifying registry addon...
	I1009 19:02:23.227973  296772 out.go:179] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-999657 service yakd-dashboard -n yakd-dashboard
	
	I1009 19:02:23.230764  296772 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1009 19:02:23.233823  296772 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1009 19:02:23.236975  296772 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1009 19:02:23.236994  296772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1009 19:02:23.242537  296772 out.go:285] ! Enabling 'storage-provisioner-rancher' returned an error: running callbacks: [Error making local-path the default storage class: Error while marking storage class local-path as default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
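Note: the 'storage-provisioner-rancher' warning above is an optimistic-concurrency conflict while annotating the local-path StorageClass as default: the object changed between read and update, so re-issuing the patch against the latest version resolves it. A minimal sketch of the equivalent manual step (the annotation key is the standard default-class marker; this is not the addon callback's own code):

	kubectl patch storageclass local-path -p '{"metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'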
	I1009 19:02:23.292030  296772 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I1009 19:02:23.292057  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 19:02:23.493373  296772 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (3.576465997s)
	I1009 19:02:23.493459  296772 addons.go:479] Verifying addon csi-hostpath-driver=true in "addons-999657"
	I1009 19:02:23.496704  296772 out.go:179] * Verifying csi-hostpath-driver addon...
	I1009 19:02:23.501548  296772 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1009 19:02:23.507130  296772 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1009 19:02:23.507156  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 19:02:23.550495  296772 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1009 19:02:23.594370  296772 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1009 19:02:23.745805  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 19:02:23.745961  296772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1009 19:02:23.923211  296772 node_ready.go:57] node "addons-999657" has "Ready":"False" status (will retry)
	I1009 19:02:24.010821  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 19:02:24.236729  296772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 19:02:24.238343  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 19:02:24.506122  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 19:02:24.628713  296772 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.078177838s)
	W1009 19:02:24.628748  296772 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:02:24.628769  296772 retry.go:31] will retry after 435.774096ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:02:24.628859  296772 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.034455942s)
	I1009 19:02:24.734899  296772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 19:02:24.737645  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 19:02:24.970027  296772 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1009 19:02:24.970116  296772 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-999657
	I1009 19:02:24.987218  296772 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/21683-294150/.minikube/machines/addons-999657/id_rsa Username:docker}
	I1009 19:02:25.005663  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 19:02:25.064753  296772 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1009 19:02:25.113195  296772 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1009 19:02:25.129726  296772 addons.go:238] Setting addon gcp-auth=true in "addons-999657"
	I1009 19:02:25.129778  296772 host.go:66] Checking if "addons-999657" exists ...
	I1009 19:02:25.130238  296772 cli_runner.go:164] Run: docker container inspect addons-999657 --format={{.State.Status}}
	I1009 19:02:25.158584  296772 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1009 19:02:25.158635  296772 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-999657
	I1009 19:02:25.177902  296772 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/21683-294150/.minikube/machines/addons-999657/id_rsa Username:docker}
	I1009 19:02:25.234392  296772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 19:02:25.236294  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 19:02:25.505554  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 19:02:25.735177  296772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 19:02:25.742866  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1009 19:02:25.905789  296772 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:02:25.905828  296772 retry.go:31] will retry after 385.413564ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:02:25.909505  296772 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.2
	I1009 19:02:25.912481  296772 out.go:179]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I1009 19:02:25.915230  296772 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1009 19:02:25.915255  296772 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1009 19:02:25.930665  296772 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1009 19:02:25.930688  296772 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1009 19:02:25.944562  296772 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1009 19:02:25.944586  296772 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I1009 19:02:25.958506  296772 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1009 19:02:26.005041  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 19:02:26.235488  296772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 19:02:26.238116  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 19:02:26.292400  296772 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	W1009 19:02:26.430002  296772 node_ready.go:57] node "addons-999657" has "Ready":"False" status (will retry)
	I1009 19:02:26.521858  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 19:02:26.568749  296772 addons.go:479] Verifying addon gcp-auth=true in "addons-999657"
	I1009 19:02:26.571967  296772 out.go:179] * Verifying gcp-auth addon...
	I1009 19:02:26.575714  296772 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1009 19:02:26.579054  296772 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1009 19:02:26.579078  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 19:02:26.740069  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 19:02:26.740467  296772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 19:02:27.004763  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 19:02:27.078618  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1009 19:02:27.210908  296772 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:02:27.210938  296772 retry.go:31] will retry after 642.044981ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:02:27.234162  296772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 19:02:27.236655  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 19:02:27.505554  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 19:02:27.579360  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 19:02:27.742325  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 19:02:27.742941  296772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 19:02:27.853255  296772 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1009 19:02:28.005289  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 19:02:28.079550  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 19:02:28.238422  296772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 19:02:28.239028  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 19:02:28.504856  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 19:02:28.579493  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1009 19:02:28.671747  296772 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:02:28.671780  296772 retry.go:31] will retry after 875.659797ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:02:28.734229  296772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 19:02:28.736685  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1009 19:02:28.922745  296772 node_ready.go:57] node "addons-999657" has "Ready":"False" status (will retry)
	I1009 19:02:29.006183  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 19:02:29.079092  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 19:02:29.234454  296772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 19:02:29.236804  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 19:02:29.504609  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 19:02:29.547745  296772 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1009 19:02:29.579116  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 19:02:29.735020  296772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 19:02:29.737210  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 19:02:30.004711  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 19:02:30.089864  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 19:02:30.238919  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 19:02:30.239624  296772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1009 19:02:30.391750  296772 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:02:30.391783  296772 retry.go:31] will retry after 2.340587157s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:02:30.504452  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 19:02:30.579268  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 19:02:30.734979  296772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 19:02:30.737238  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 19:02:31.005297  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 19:02:31.079377  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 19:02:31.234569  296772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 19:02:31.236632  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1009 19:02:31.422724  296772 node_ready.go:57] node "addons-999657" has "Ready":"False" status (will retry)
	I1009 19:02:31.504535  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 19:02:31.579623  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 19:02:31.735940  296772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 19:02:31.737613  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 19:02:32.005914  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 19:02:32.078909  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 19:02:32.234298  296772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 19:02:32.236523  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 19:02:32.504379  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 19:02:32.579198  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 19:02:32.733575  296772 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1009 19:02:32.746517  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 19:02:32.746813  296772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 19:02:33.005614  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 19:02:33.079676  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 19:02:33.234686  296772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 19:02:33.237131  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1009 19:02:33.425045  296772 node_ready.go:57] node "addons-999657" has "Ready":"False" status (will retry)
	I1009 19:02:33.504728  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1009 19:02:33.543312  296772 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:02:33.543390  296772 retry.go:31] will retry after 2.399666522s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:02:33.579349  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 19:02:33.739809  296772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 19:02:33.742840  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 19:02:34.005018  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 19:02:34.079297  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 19:02:34.234939  296772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 19:02:34.237265  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 19:02:34.505412  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 19:02:34.579318  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 19:02:34.735148  296772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 19:02:34.742230  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 19:02:35.004695  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 19:02:35.078635  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 19:02:35.233865  296772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 19:02:35.236184  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 19:02:35.505059  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 19:02:35.579181  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 19:02:35.734907  296772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 19:02:35.737196  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1009 19:02:35.922215  296772 node_ready.go:57] node "addons-999657" has "Ready":"False" status (will retry)
	I1009 19:02:35.943478  296772 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1009 19:02:36.008993  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 19:02:36.078991  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 19:02:36.235475  296772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 19:02:36.237255  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 19:02:36.505209  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 19:02:36.579745  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 19:02:36.738152  296772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 19:02:36.742355  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1009 19:02:36.773640  296772 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:02:36.773674  296772 retry.go:31] will retry after 6.060744408s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:02:37.004813  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 19:02:37.078837  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 19:02:37.234842  296772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 19:02:37.237012  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 19:02:37.504775  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 19:02:37.578729  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 19:02:37.740891  296772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 19:02:37.740979  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1009 19:02:37.922959  296772 node_ready.go:57] node "addons-999657" has "Ready":"False" status (will retry)
	I1009 19:02:38.004581  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 19:02:38.079757  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 19:02:38.234489  296772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 19:02:38.236938  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 19:02:38.505203  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 19:02:38.579386  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 19:02:38.735787  296772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 19:02:38.738538  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 19:02:39.006049  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 19:02:39.079721  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 19:02:39.234049  296772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 19:02:39.236579  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 19:02:39.504726  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 19:02:39.579759  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 19:02:39.733972  296772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 19:02:39.736360  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 19:02:40.005472  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 19:02:40.086286  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 19:02:40.235084  296772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 19:02:40.238188  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1009 19:02:40.422482  296772 node_ready.go:57] node "addons-999657" has "Ready":"False" status (will retry)
	I1009 19:02:40.504466  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 19:02:40.579541  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 19:02:40.736982  296772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 19:02:40.738263  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 19:02:41.005436  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 19:02:41.080001  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 19:02:41.234453  296772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 19:02:41.236436  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 19:02:41.504560  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 19:02:41.579675  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 19:02:41.736332  296772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 19:02:41.741723  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 19:02:42.004493  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 19:02:42.079760  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 19:02:42.235569  296772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 19:02:42.237598  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 19:02:42.505376  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 19:02:42.579434  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 19:02:42.734383  296772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 19:02:42.741144  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 19:02:42.835375  296772 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	W1009 19:02:42.922734  296772 node_ready.go:57] node "addons-999657" has "Ready":"False" status (will retry)
	I1009 19:02:43.005307  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 19:02:43.079914  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 19:02:43.234613  296772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 19:02:43.236408  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 19:02:43.505181  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 19:02:43.579552  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1009 19:02:43.652311  296772 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:02:43.652347  296772 retry.go:31] will retry after 9.23352868s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
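	Every one of the repeated "apply failed, will retry" entries above traces back to the same kubectl complaint: one YAML document inside ig-crd.yaml is missing its apiVersion and kind fields, so validation rejects the apply even though the other manifests in the batch come back "unchanged". A minimal, hypothetical pre-check in Go (not minikube's own code; it assumes a local copy of the file and uses gopkg.in/yaml.v3) would surface the offending document before kubectl does:

	package main

	import (
		"fmt"
		"os"
		"strings"

		"gopkg.in/yaml.v3"
	)

	// typeMeta mirrors the two fields the kubectl validation error names.
	type typeMeta struct {
		APIVersion string `yaml:"apiVersion"`
		Kind       string `yaml:"kind"`
	}

	func main() {
		// Hypothetical local path; the report only shows /etc/kubernetes/addons/ig-crd.yaml on the node.
		data, err := os.ReadFile("ig-crd.yaml")
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		// Crude multi-document split on "---"; a real tool would use a YAML stream decoder.
		for i, doc := range strings.Split(string(data), "\n---") {
			if strings.TrimSpace(doc) == "" {
				continue
			}
			var tm typeMeta
			if err := yaml.Unmarshal([]byte(doc), &tm); err != nil {
				fmt.Printf("document %d: unparsable YAML: %v\n", i, err)
				continue
			}
			if tm.APIVersion == "" || tm.Kind == "" {
				fmt.Printf("document %d: apiVersion or kind not set (this is what kubectl rejects)\n", i)
			}
		}
	}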
	I1009 19:02:43.734906  296772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 19:02:43.740048  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 19:02:44.004569  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 19:02:44.079651  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 19:02:44.235672  296772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 19:02:44.237067  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 19:02:44.505280  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 19:02:44.579251  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 19:02:44.735515  296772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 19:02:44.737890  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1009 19:02:44.922896  296772 node_ready.go:57] node "addons-999657" has "Ready":"False" status (will retry)
	I1009 19:02:45.004579  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 19:02:45.080688  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 19:02:45.238073  296772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 19:02:45.238164  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 19:02:45.505719  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 19:02:45.580209  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 19:02:45.739656  296772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 19:02:45.740422  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 19:02:46.004742  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 19:02:46.082188  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 19:02:46.235335  296772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 19:02:46.237831  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 19:02:46.505570  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 19:02:46.579412  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 19:02:46.735361  296772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 19:02:46.737512  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1009 19:02:46.923283  296772 node_ready.go:57] node "addons-999657" has "Ready":"False" status (will retry)
	I1009 19:02:47.005226  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 19:02:47.079326  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 19:02:47.234519  296772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 19:02:47.236649  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 19:02:47.505631  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 19:02:47.579575  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 19:02:47.736220  296772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 19:02:47.738505  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 19:02:48.004926  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 19:02:48.078973  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 19:02:48.233980  296772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 19:02:48.237218  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 19:02:48.505708  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 19:02:48.579609  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 19:02:48.735355  296772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 19:02:48.737823  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 19:02:49.005322  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 19:02:49.079519  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 19:02:49.234994  296772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 19:02:49.237161  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1009 19:02:49.422272  296772 node_ready.go:57] node "addons-999657" has "Ready":"False" status (will retry)
	I1009 19:02:49.505487  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 19:02:49.579864  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 19:02:49.734405  296772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 19:02:49.736747  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 19:02:50.004578  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 19:02:50.079898  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 19:02:50.234467  296772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 19:02:50.236744  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 19:02:50.505759  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 19:02:50.578600  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 19:02:50.738090  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 19:02:50.739827  296772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 19:02:51.005199  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 19:02:51.079530  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 19:02:51.234843  296772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 19:02:51.237354  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1009 19:02:51.422525  296772 node_ready.go:57] node "addons-999657" has "Ready":"False" status (will retry)
	I1009 19:02:51.504683  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 19:02:51.578955  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 19:02:51.736011  296772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 19:02:51.737597  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 19:02:52.004989  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 19:02:52.078973  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 19:02:52.234658  296772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 19:02:52.236925  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 19:02:52.505411  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 19:02:52.579622  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 19:02:52.740742  296772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 19:02:52.744193  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 19:02:52.886290  296772 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1009 19:02:53.005147  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 19:02:53.079368  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 19:02:53.234860  296772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 19:02:53.237189  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1009 19:02:53.423282  296772 node_ready.go:57] node "addons-999657" has "Ready":"False" status (will retry)
	I1009 19:02:53.506303  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 19:02:53.580448  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1009 19:02:53.710854  296772 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:02:53.710892  296772 retry.go:31] will retry after 10.565917129s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:02:53.735899  296772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 19:02:53.737404  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 19:02:54.004848  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 19:02:54.079275  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 19:02:54.234639  296772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 19:02:54.237285  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 19:02:54.505251  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 19:02:54.579378  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 19:02:54.735859  296772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 19:02:54.737905  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 19:02:55.005035  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 19:02:55.079499  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 19:02:55.235265  296772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 19:02:55.236843  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 19:02:55.505439  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 19:02:55.579413  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 19:02:55.735919  296772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 19:02:55.740967  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1009 19:02:55.923207  296772 node_ready.go:57] node "addons-999657" has "Ready":"False" status (will retry)
	I1009 19:02:56.005018  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 19:02:56.078910  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 19:02:56.234192  296772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 19:02:56.236879  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 19:02:56.505534  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 19:02:56.579341  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 19:02:56.734655  296772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 19:02:56.737211  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 19:02:57.005068  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 19:02:57.079002  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 19:02:57.234751  296772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 19:02:57.237144  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 19:02:57.505277  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 19:02:57.579260  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 19:02:57.735068  296772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 19:02:57.737403  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 19:02:58.064586  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 19:02:58.130212  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 19:02:58.287242  296772 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1009 19:02:58.287267  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 19:02:58.287658  296772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 19:02:58.471744  296772 node_ready.go:49] node "addons-999657" is "Ready"
	I1009 19:02:58.471777  296772 node_ready.go:38] duration metric: took 39.052545505s for node "addons-999657" to be "Ready" ...
	I1009 19:02:58.471791  296772 api_server.go:52] waiting for apiserver process to appear ...
	I1009 19:02:58.471850  296772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 19:02:58.505398  296772 api_server.go:72] duration metric: took 41.632931932s to wait for apiserver process to appear ...
	I1009 19:02:58.505424  296772 api_server.go:88] waiting for apiserver healthz status ...
	I1009 19:02:58.505452  296772 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1009 19:02:58.528264  296772 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1009 19:02:58.528290  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 19:02:58.528780  296772 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1009 19:02:58.538888  296772 api_server.go:141] control plane version: v1.34.1
	I1009 19:02:58.538922  296772 api_server.go:131] duration metric: took 33.489056ms to wait for apiserver health ...
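	The healthz sequence just above is the readiness gate on the API server itself: poll https://192.168.49.2:8443/healthz until it answers 200 with body "ok". A stripped-down sketch of that loop (standard library only; TLS verification is skipped here only because this illustrative snippet carries no CA bundle, unlike the real check) might look like:

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	func waitForHealthz(url string, timeout time.Duration) error {
		client := &http.Client{
			Timeout: 2 * time.Second,
			// Sketch only: no CA bundle is wired in, so certificate checks are skipped.
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				// Mirrors the "returned 200: ok" line in the log above.
				if resp.StatusCode == http.StatusOK && string(body) == "ok" {
					return nil
				}
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("apiserver %s not healthy after %s", url, timeout)
	}

	func main() {
		if err := waitForHealthz("https://192.168.49.2:8443/healthz", time.Minute); err != nil {
			fmt.Println(err)
		}
	}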
	I1009 19:02:58.538932  296772 system_pods.go:43] waiting for kube-system pods to appear ...
	I1009 19:02:58.556864  296772 system_pods.go:59] 19 kube-system pods found
	I1009 19:02:58.556909  296772 system_pods.go:61] "coredns-66bc5c9577-dm266" [2bf7787a-2738-43b9-8632-2b4157093789] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1009 19:02:58.556917  296772 system_pods.go:61] "csi-hostpath-attacher-0" [d9525068-4ed2-4fb0-a039-6768fd5cb26d] Pending
	I1009 19:02:58.556924  296772 system_pods.go:61] "csi-hostpath-resizer-0" [fe8c3320-7341-44ca-a991-ceaa16169a16] Pending
	I1009 19:02:58.556928  296772 system_pods.go:61] "csi-hostpathplugin-4b7rw" [5d573ce2-a134-413f-bb7e-939b263f86b2] Pending
	I1009 19:02:58.556933  296772 system_pods.go:61] "etcd-addons-999657" [e291d5e1-16ff-4e53-a970-58bf8afdf50c] Running
	I1009 19:02:58.556937  296772 system_pods.go:61] "kindnet-rztm2" [cbd574f9-584b-4118-ac18-abf4a715e249] Running
	I1009 19:02:58.556942  296772 system_pods.go:61] "kube-apiserver-addons-999657" [b8c9a6f4-5ae1-4faa-93f7-41c4f1100242] Running
	I1009 19:02:58.556947  296772 system_pods.go:61] "kube-controller-manager-addons-999657" [2e50195e-1447-4d4f-9ac4-c40cd15dcd11] Running
	I1009 19:02:58.556957  296772 system_pods.go:61] "kube-ingress-dns-minikube" [a0996862-e9b2-4ebd-9384-3018e153d32b] Pending
	I1009 19:02:58.556962  296772 system_pods.go:61] "kube-proxy-jcwfl" [07e1a1bf-5df2-4e42-8302-7d69acb08479] Running
	I1009 19:02:58.556969  296772 system_pods.go:61] "kube-scheduler-addons-999657" [c297d383-d2f4-4f34-82f6-76d68291ccbf] Running
	I1009 19:02:58.556977  296772 system_pods.go:61] "metrics-server-85b7d694d7-qgbgn" [1b9f013c-1ebf-4d60-b677-f20de508376a] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1009 19:02:58.556987  296772 system_pods.go:61] "nvidia-device-plugin-daemonset-4lmwx" [2cd943cc-3d6e-418d-ab07-d6fe025ccc38] Pending
	I1009 19:02:58.556994  296772 system_pods.go:61] "registry-66898fdd98-d8jgl" [d398f51d-a918-4ce0-89c9-47064bd1ae01] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1009 19:02:58.557001  296772 system_pods.go:61] "registry-creds-764b6fb674-gq9vn" [bbaa910d-1ec1-4260-9cf0-961ed5abd1c8] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1009 19:02:58.557009  296772 system_pods.go:61] "registry-proxy-q9p6k" [ae33fd9b-bb98-4d1b-9150-ab438ca12680] Pending
	I1009 19:02:58.557017  296772 system_pods.go:61] "snapshot-controller-7d9fbc56b8-jp7nw" [56d55ae5-a807-484f-868d-0d3f3d1b14f6] Pending
	I1009 19:02:58.557022  296772 system_pods.go:61] "snapshot-controller-7d9fbc56b8-txqvb" [89dc3810-4bbb-4414-91c4-558f2d3651fd] Pending
	I1009 19:02:58.557038  296772 system_pods.go:61] "storage-provisioner" [eb0bf8bf-b888-410e-86f0-da0dec609732] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1009 19:02:58.557045  296772 system_pods.go:74] duration metric: took 18.105943ms to wait for pod list to return data ...
	I1009 19:02:58.557056  296772 default_sa.go:34] waiting for default service account to be created ...
	I1009 19:02:58.567798  296772 default_sa.go:45] found service account: "default"
	I1009 19:02:58.567826  296772 default_sa.go:55] duration metric: took 10.761935ms for default service account to be created ...
	I1009 19:02:58.567837  296772 system_pods.go:116] waiting for k8s-apps to be running ...
	I1009 19:02:58.604373  296772 system_pods.go:86] 19 kube-system pods found
	I1009 19:02:58.604413  296772 system_pods.go:89] "coredns-66bc5c9577-dm266" [2bf7787a-2738-43b9-8632-2b4157093789] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1009 19:02:58.604423  296772 system_pods.go:89] "csi-hostpath-attacher-0" [d9525068-4ed2-4fb0-a039-6768fd5cb26d] Pending
	I1009 19:02:58.604428  296772 system_pods.go:89] "csi-hostpath-resizer-0" [fe8c3320-7341-44ca-a991-ceaa16169a16] Pending
	I1009 19:02:58.604433  296772 system_pods.go:89] "csi-hostpathplugin-4b7rw" [5d573ce2-a134-413f-bb7e-939b263f86b2] Pending
	I1009 19:02:58.604437  296772 system_pods.go:89] "etcd-addons-999657" [e291d5e1-16ff-4e53-a970-58bf8afdf50c] Running
	I1009 19:02:58.604442  296772 system_pods.go:89] "kindnet-rztm2" [cbd574f9-584b-4118-ac18-abf4a715e249] Running
	I1009 19:02:58.604446  296772 system_pods.go:89] "kube-apiserver-addons-999657" [b8c9a6f4-5ae1-4faa-93f7-41c4f1100242] Running
	I1009 19:02:58.604451  296772 system_pods.go:89] "kube-controller-manager-addons-999657" [2e50195e-1447-4d4f-9ac4-c40cd15dcd11] Running
	I1009 19:02:58.604459  296772 system_pods.go:89] "kube-ingress-dns-minikube" [a0996862-e9b2-4ebd-9384-3018e153d32b] Pending
	I1009 19:02:58.604463  296772 system_pods.go:89] "kube-proxy-jcwfl" [07e1a1bf-5df2-4e42-8302-7d69acb08479] Running
	I1009 19:02:58.604474  296772 system_pods.go:89] "kube-scheduler-addons-999657" [c297d383-d2f4-4f34-82f6-76d68291ccbf] Running
	I1009 19:02:58.604480  296772 system_pods.go:89] "metrics-server-85b7d694d7-qgbgn" [1b9f013c-1ebf-4d60-b677-f20de508376a] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1009 19:02:58.604491  296772 system_pods.go:89] "nvidia-device-plugin-daemonset-4lmwx" [2cd943cc-3d6e-418d-ab07-d6fe025ccc38] Pending
	I1009 19:02:58.604500  296772 system_pods.go:89] "registry-66898fdd98-d8jgl" [d398f51d-a918-4ce0-89c9-47064bd1ae01] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1009 19:02:58.604513  296772 system_pods.go:89] "registry-creds-764b6fb674-gq9vn" [bbaa910d-1ec1-4260-9cf0-961ed5abd1c8] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1009 19:02:58.604518  296772 system_pods.go:89] "registry-proxy-q9p6k" [ae33fd9b-bb98-4d1b-9150-ab438ca12680] Pending
	I1009 19:02:58.604522  296772 system_pods.go:89] "snapshot-controller-7d9fbc56b8-jp7nw" [56d55ae5-a807-484f-868d-0d3f3d1b14f6] Pending
	I1009 19:02:58.604526  296772 system_pods.go:89] "snapshot-controller-7d9fbc56b8-txqvb" [89dc3810-4bbb-4414-91c4-558f2d3651fd] Pending
	I1009 19:02:58.604540  296772 system_pods.go:89] "storage-provisioner" [eb0bf8bf-b888-410e-86f0-da0dec609732] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1009 19:02:58.604556  296772 retry.go:31] will retry after 311.792981ms: missing components: kube-dns
	I1009 19:02:58.605295  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 19:02:58.767371  296772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 19:02:58.769163  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 19:02:58.933628  296772 system_pods.go:86] 19 kube-system pods found
	I1009 19:02:58.933669  296772 system_pods.go:89] "coredns-66bc5c9577-dm266" [2bf7787a-2738-43b9-8632-2b4157093789] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1009 19:02:58.933676  296772 system_pods.go:89] "csi-hostpath-attacher-0" [d9525068-4ed2-4fb0-a039-6768fd5cb26d] Pending
	I1009 19:02:58.933684  296772 system_pods.go:89] "csi-hostpath-resizer-0" [fe8c3320-7341-44ca-a991-ceaa16169a16] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1009 19:02:58.933688  296772 system_pods.go:89] "csi-hostpathplugin-4b7rw" [5d573ce2-a134-413f-bb7e-939b263f86b2] Pending
	I1009 19:02:58.933693  296772 system_pods.go:89] "etcd-addons-999657" [e291d5e1-16ff-4e53-a970-58bf8afdf50c] Running
	I1009 19:02:58.933698  296772 system_pods.go:89] "kindnet-rztm2" [cbd574f9-584b-4118-ac18-abf4a715e249] Running
	I1009 19:02:58.933703  296772 system_pods.go:89] "kube-apiserver-addons-999657" [b8c9a6f4-5ae1-4faa-93f7-41c4f1100242] Running
	I1009 19:02:58.933707  296772 system_pods.go:89] "kube-controller-manager-addons-999657" [2e50195e-1447-4d4f-9ac4-c40cd15dcd11] Running
	I1009 19:02:58.933719  296772 system_pods.go:89] "kube-ingress-dns-minikube" [a0996862-e9b2-4ebd-9384-3018e153d32b] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1009 19:02:58.933724  296772 system_pods.go:89] "kube-proxy-jcwfl" [07e1a1bf-5df2-4e42-8302-7d69acb08479] Running
	I1009 19:02:58.933736  296772 system_pods.go:89] "kube-scheduler-addons-999657" [c297d383-d2f4-4f34-82f6-76d68291ccbf] Running
	I1009 19:02:58.933743  296772 system_pods.go:89] "metrics-server-85b7d694d7-qgbgn" [1b9f013c-1ebf-4d60-b677-f20de508376a] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1009 19:02:58.933757  296772 system_pods.go:89] "nvidia-device-plugin-daemonset-4lmwx" [2cd943cc-3d6e-418d-ab07-d6fe025ccc38] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1009 19:02:58.933764  296772 system_pods.go:89] "registry-66898fdd98-d8jgl" [d398f51d-a918-4ce0-89c9-47064bd1ae01] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1009 19:02:58.933776  296772 system_pods.go:89] "registry-creds-764b6fb674-gq9vn" [bbaa910d-1ec1-4260-9cf0-961ed5abd1c8] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1009 19:02:58.933781  296772 system_pods.go:89] "registry-proxy-q9p6k" [ae33fd9b-bb98-4d1b-9150-ab438ca12680] Pending
	I1009 19:02:58.933787  296772 system_pods.go:89] "snapshot-controller-7d9fbc56b8-jp7nw" [56d55ae5-a807-484f-868d-0d3f3d1b14f6] Pending
	I1009 19:02:58.933803  296772 system_pods.go:89] "snapshot-controller-7d9fbc56b8-txqvb" [89dc3810-4bbb-4414-91c4-558f2d3651fd] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1009 19:02:58.933810  296772 system_pods.go:89] "storage-provisioner" [eb0bf8bf-b888-410e-86f0-da0dec609732] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1009 19:02:58.933829  296772 retry.go:31] will retry after 235.971577ms: missing components: kube-dns
	I1009 19:02:59.010521  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 19:02:59.110398  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 19:02:59.213154  296772 system_pods.go:86] 19 kube-system pods found
	I1009 19:02:59.213193  296772 system_pods.go:89] "coredns-66bc5c9577-dm266" [2bf7787a-2738-43b9-8632-2b4157093789] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1009 19:02:59.213207  296772 system_pods.go:89] "csi-hostpath-attacher-0" [d9525068-4ed2-4fb0-a039-6768fd5cb26d] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1009 19:02:59.213217  296772 system_pods.go:89] "csi-hostpath-resizer-0" [fe8c3320-7341-44ca-a991-ceaa16169a16] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1009 19:02:59.213226  296772 system_pods.go:89] "csi-hostpathplugin-4b7rw" [5d573ce2-a134-413f-bb7e-939b263f86b2] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1009 19:02:59.213235  296772 system_pods.go:89] "etcd-addons-999657" [e291d5e1-16ff-4e53-a970-58bf8afdf50c] Running
	I1009 19:02:59.213244  296772 system_pods.go:89] "kindnet-rztm2" [cbd574f9-584b-4118-ac18-abf4a715e249] Running
	I1009 19:02:59.213249  296772 system_pods.go:89] "kube-apiserver-addons-999657" [b8c9a6f4-5ae1-4faa-93f7-41c4f1100242] Running
	I1009 19:02:59.213260  296772 system_pods.go:89] "kube-controller-manager-addons-999657" [2e50195e-1447-4d4f-9ac4-c40cd15dcd11] Running
	I1009 19:02:59.213267  296772 system_pods.go:89] "kube-ingress-dns-minikube" [a0996862-e9b2-4ebd-9384-3018e153d32b] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1009 19:02:59.213277  296772 system_pods.go:89] "kube-proxy-jcwfl" [07e1a1bf-5df2-4e42-8302-7d69acb08479] Running
	I1009 19:02:59.213282  296772 system_pods.go:89] "kube-scheduler-addons-999657" [c297d383-d2f4-4f34-82f6-76d68291ccbf] Running
	I1009 19:02:59.213290  296772 system_pods.go:89] "metrics-server-85b7d694d7-qgbgn" [1b9f013c-1ebf-4d60-b677-f20de508376a] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1009 19:02:59.213301  296772 system_pods.go:89] "nvidia-device-plugin-daemonset-4lmwx" [2cd943cc-3d6e-418d-ab07-d6fe025ccc38] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1009 19:02:59.213308  296772 system_pods.go:89] "registry-66898fdd98-d8jgl" [d398f51d-a918-4ce0-89c9-47064bd1ae01] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1009 19:02:59.213317  296772 system_pods.go:89] "registry-creds-764b6fb674-gq9vn" [bbaa910d-1ec1-4260-9cf0-961ed5abd1c8] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1009 19:02:59.213323  296772 system_pods.go:89] "registry-proxy-q9p6k" [ae33fd9b-bb98-4d1b-9150-ab438ca12680] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1009 19:02:59.213329  296772 system_pods.go:89] "snapshot-controller-7d9fbc56b8-jp7nw" [56d55ae5-a807-484f-868d-0d3f3d1b14f6] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1009 19:02:59.213338  296772 system_pods.go:89] "snapshot-controller-7d9fbc56b8-txqvb" [89dc3810-4bbb-4414-91c4-558f2d3651fd] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1009 19:02:59.213348  296772 system_pods.go:89] "storage-provisioner" [eb0bf8bf-b888-410e-86f0-da0dec609732] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1009 19:02:59.213364  296772 retry.go:31] will retry after 342.914299ms: missing components: kube-dns
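	The "missing components: kube-dns" retries stop only once a coredns pod reaches Running, which the next listing shows. As a rough illustration of that kind of check (a sketch assuming client-go, the node's kubeconfig path, and the conventional k8s-app=kube-dns label; not minikube's internal helper):

	package main

	import (
		"context"
		"fmt"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Kubeconfig path taken from the kubectl invocations in the log; adjust as needed.
		cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		pods, err := cs.CoreV1().Pods("kube-system").List(context.Background(),
			metav1.ListOptions{LabelSelector: "k8s-app=kube-dns"})
		if err != nil {
			panic(err)
		}
		for _, p := range pods.Items {
			// The retry loop above is satisfied once at least one of these is Running.
			fmt.Printf("%s running=%v\n", p.Name, p.Status.Phase == corev1.PodRunning)
		}
	}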
	I1009 19:02:59.312357  296772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 19:02:59.312539  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 19:02:59.504871  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 19:02:59.561278  296772 system_pods.go:86] 19 kube-system pods found
	I1009 19:02:59.561309  296772 system_pods.go:89] "coredns-66bc5c9577-dm266" [2bf7787a-2738-43b9-8632-2b4157093789] Running
	I1009 19:02:59.561319  296772 system_pods.go:89] "csi-hostpath-attacher-0" [d9525068-4ed2-4fb0-a039-6768fd5cb26d] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1009 19:02:59.561327  296772 system_pods.go:89] "csi-hostpath-resizer-0" [fe8c3320-7341-44ca-a991-ceaa16169a16] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1009 19:02:59.561335  296772 system_pods.go:89] "csi-hostpathplugin-4b7rw" [5d573ce2-a134-413f-bb7e-939b263f86b2] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1009 19:02:59.561340  296772 system_pods.go:89] "etcd-addons-999657" [e291d5e1-16ff-4e53-a970-58bf8afdf50c] Running
	I1009 19:02:59.561344  296772 system_pods.go:89] "kindnet-rztm2" [cbd574f9-584b-4118-ac18-abf4a715e249] Running
	I1009 19:02:59.561353  296772 system_pods.go:89] "kube-apiserver-addons-999657" [b8c9a6f4-5ae1-4faa-93f7-41c4f1100242] Running
	I1009 19:02:59.561359  296772 system_pods.go:89] "kube-controller-manager-addons-999657" [2e50195e-1447-4d4f-9ac4-c40cd15dcd11] Running
	I1009 19:02:59.561366  296772 system_pods.go:89] "kube-ingress-dns-minikube" [a0996862-e9b2-4ebd-9384-3018e153d32b] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1009 19:02:59.561375  296772 system_pods.go:89] "kube-proxy-jcwfl" [07e1a1bf-5df2-4e42-8302-7d69acb08479] Running
	I1009 19:02:59.561380  296772 system_pods.go:89] "kube-scheduler-addons-999657" [c297d383-d2f4-4f34-82f6-76d68291ccbf] Running
	I1009 19:02:59.561388  296772 system_pods.go:89] "metrics-server-85b7d694d7-qgbgn" [1b9f013c-1ebf-4d60-b677-f20de508376a] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1009 19:02:59.561399  296772 system_pods.go:89] "nvidia-device-plugin-daemonset-4lmwx" [2cd943cc-3d6e-418d-ab07-d6fe025ccc38] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1009 19:02:59.561405  296772 system_pods.go:89] "registry-66898fdd98-d8jgl" [d398f51d-a918-4ce0-89c9-47064bd1ae01] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1009 19:02:59.561416  296772 system_pods.go:89] "registry-creds-764b6fb674-gq9vn" [bbaa910d-1ec1-4260-9cf0-961ed5abd1c8] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1009 19:02:59.561421  296772 system_pods.go:89] "registry-proxy-q9p6k" [ae33fd9b-bb98-4d1b-9150-ab438ca12680] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1009 19:02:59.561434  296772 system_pods.go:89] "snapshot-controller-7d9fbc56b8-jp7nw" [56d55ae5-a807-484f-868d-0d3f3d1b14f6] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1009 19:02:59.561452  296772 system_pods.go:89] "snapshot-controller-7d9fbc56b8-txqvb" [89dc3810-4bbb-4414-91c4-558f2d3651fd] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1009 19:02:59.561462  296772 system_pods.go:89] "storage-provisioner" [eb0bf8bf-b888-410e-86f0-da0dec609732] Running
	I1009 19:02:59.561472  296772 system_pods.go:126] duration metric: took 993.628633ms to wait for k8s-apps to be running ...
	I1009 19:02:59.561485  296772 system_svc.go:44] waiting for kubelet service to be running ....
	I1009 19:02:59.561543  296772 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1009 19:02:59.576311  296772 system_svc.go:56] duration metric: took 14.817504ms WaitForService to wait for kubelet
	I1009 19:02:59.576341  296772 kubeadm.go:586] duration metric: took 42.703881644s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
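	The kubelet check a few lines up is a plain shell-out to systemd over SSH (sudo systemctl is-active --quiet service kubelet). A local equivalent, assuming a systemd host, is a one-liner around os/exec:

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// is-active exits 0 when the unit is active; --quiet suppresses output,
		// matching the command minikube runs over SSH in the log above.
		err := exec.Command("systemctl", "is-active", "--quiet", "kubelet").Run()
		fmt.Println("kubelet active:", err == nil)
	}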
	I1009 19:02:59.576360  296772 node_conditions.go:102] verifying NodePressure condition ...
	I1009 19:02:59.580568  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 19:02:59.581245  296772 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1009 19:02:59.581274  296772 node_conditions.go:123] node cpu capacity is 2
	I1009 19:02:59.581287  296772 node_conditions.go:105] duration metric: took 4.921468ms to run NodePressure ...
	I1009 19:02:59.581300  296772 start.go:242] waiting for startup goroutines ...
	I1009 19:02:59.734413  296772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 19:02:59.743034  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 19:03:00.005762  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 19:03:00.081352  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 19:03:00.247940  296772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 19:03:00.249361  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 19:03:00.507381  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 19:03:00.580189  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 19:03:00.741689  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 19:03:00.742206  296772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 19:03:01.006194  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 19:03:01.106710  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 19:03:01.234068  296772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 19:03:01.236612  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 19:03:01.506638  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 19:03:01.580440  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 19:03:01.735404  296772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 19:03:01.743414  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 19:03:02.007084  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 19:03:02.079737  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 19:03:02.234326  296772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 19:03:02.236624  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 19:03:02.505175  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 19:03:02.579344  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 19:03:02.734862  296772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 19:03:02.742557  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 19:03:03.005169  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 19:03:03.079494  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 19:03:03.235072  296772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 19:03:03.237516  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 19:03:03.505586  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 19:03:03.579945  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 19:03:03.742914  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 19:03:03.744178  296772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 19:03:04.006601  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 19:03:04.079951  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 19:03:04.234012  296772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 19:03:04.236432  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 19:03:04.277716  296772 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1009 19:03:04.505344  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 19:03:04.578982  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 19:03:04.740078  296772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 19:03:04.742147  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 19:03:05.006015  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 19:03:05.079853  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 19:03:05.235047  296772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 19:03:05.237342  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 19:03:05.317517  296772 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.039761002s)
	W1009 19:03:05.317559  296772 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:03:05.317578  296772 retry.go:31] will retry after 13.628467829s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:03:05.511442  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 19:03:05.578994  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 19:03:05.736983  296772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 19:03:05.739046  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 19:03:06.006194  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 19:03:06.079617  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 19:03:06.237174  296772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 19:03:06.239134  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 19:03:06.505573  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 19:03:06.579832  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 19:03:06.734373  296772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 19:03:06.737041  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 19:03:07.006276  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 19:03:07.079603  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 19:03:07.236406  296772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 19:03:07.238629  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 19:03:07.505132  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 19:03:07.606004  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 19:03:07.740394  296772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 19:03:07.740838  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 19:03:08.006310  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 19:03:08.080398  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 19:03:08.236102  296772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 19:03:08.238526  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 19:03:08.506578  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 19:03:08.586785  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 19:03:08.736376  296772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 19:03:08.738138  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 19:03:09.008241  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 19:03:09.079967  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 19:03:09.235559  296772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 19:03:09.238639  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 19:03:09.506075  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 19:03:09.583900  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 19:03:09.738282  296772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 19:03:09.739900  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 19:03:10.005974  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 19:03:10.105947  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 19:03:10.234219  296772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 19:03:10.236144  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 19:03:10.505614  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 19:03:10.584592  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 19:03:10.741447  296772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 19:03:10.743204  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 19:03:11.005173  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 19:03:11.079438  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 19:03:11.234650  296772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 19:03:11.237185  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 19:03:11.506821  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 19:03:11.579716  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 19:03:11.734861  296772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 19:03:11.743685  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 19:03:12.005843  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 19:03:12.079285  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 19:03:12.237945  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 19:03:12.238756  296772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 19:03:12.505965  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 19:03:12.579304  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 19:03:12.737151  296772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 19:03:12.742511  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 19:03:13.005439  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 19:03:13.080595  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 19:03:13.235238  296772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 19:03:13.238224  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 19:03:13.506572  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 19:03:13.607121  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 19:03:13.739614  296772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 19:03:13.741932  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 19:03:14.006631  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 19:03:14.106476  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 19:03:14.234893  296772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 19:03:14.237247  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 19:03:14.506406  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 19:03:14.579881  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 19:03:14.738640  296772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 19:03:14.743240  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 19:03:15.005796  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 19:03:15.104889  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 19:03:15.235715  296772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 19:03:15.236980  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 19:03:15.506428  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 19:03:15.579373  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 19:03:15.739684  296772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 19:03:15.740171  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 19:03:16.006141  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 19:03:16.079360  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 19:03:16.237127  296772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 19:03:16.239115  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 19:03:16.505964  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 19:03:16.579293  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 19:03:16.734963  296772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 19:03:16.737073  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 19:03:17.005889  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 19:03:17.079043  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 19:03:17.236774  296772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 19:03:17.239577  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 19:03:17.505240  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 19:03:17.579051  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 19:03:17.735166  296772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 19:03:17.737424  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 19:03:18.004841  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 19:03:18.079249  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 19:03:18.235614  296772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 19:03:18.237376  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 19:03:18.504955  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 19:03:18.579280  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 19:03:18.734738  296772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 19:03:18.737025  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 19:03:18.946306  296772 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1009 19:03:19.005982  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 19:03:19.078969  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 19:03:19.235000  296772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 19:03:19.241401  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 19:03:19.505535  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 19:03:19.579909  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 19:03:19.734674  296772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 19:03:19.737567  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1009 19:03:19.880643  296772 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:03:19.880690  296772 retry.go:31] will retry after 31.146680689s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
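
The retry.go lines show the failed apply being re-scheduled with a growing delay (about 13.6s after the first failure, then about 31.1s). A sketch of that retry-with-increasing-delay pattern, assuming a roughly doubling, jittered backoff; this illustrates the pattern visible in the log, not minikube's actual retry implementation:

	package main

	import (
		"fmt"
		"math/rand"
		"time"
	)

	// retry runs fn up to attempts times, sleeping a jittered, roughly doubling
	// delay between failures, and returns the last error if every attempt fails.
	func retry(attempts int, base time.Duration, fn func() error) error {
		var err error
		delay := base
		for i := 0; i < attempts; i++ {
			if err = fn(); err == nil {
				return nil
			}
			jitter := time.Duration(rand.Int63n(int64(delay) / 2))
			fmt.Printf("will retry after %v: %v\n", delay+jitter, err)
			time.Sleep(delay + jitter)
			delay *= 2
		}
		return err
	}

	func main() {
		_ = retry(3, 10*time.Second, func() error {
			// Stand-in for the kubectl apply of the addon manifests above.
			return fmt.Errorf("apply failed")
		})
	}
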
	I1009 19:03:20.005516  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 19:03:20.079859  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 19:03:20.235979  296772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 19:03:20.237713  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 19:03:20.505989  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 19:03:20.579685  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 19:03:20.734974  296772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 19:03:20.738703  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 19:03:21.006037  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 19:03:21.080176  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 19:03:21.235112  296772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 19:03:21.238406  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 19:03:21.505847  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 19:03:21.579361  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 19:03:21.735215  296772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 19:03:21.737717  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 19:03:22.005308  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 19:03:22.079772  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 19:03:22.234084  296772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 19:03:22.236625  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 19:03:22.514909  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 19:03:22.580012  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 19:03:22.734681  296772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 19:03:22.737544  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 19:03:23.005272  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 19:03:23.079660  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 19:03:23.235512  296772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 19:03:23.238686  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 19:03:23.506184  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 19:03:23.579713  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 19:03:23.734784  296772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 19:03:23.738576  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 19:03:24.006024  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 19:03:24.079334  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 19:03:24.235358  296772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 19:03:24.238732  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 19:03:24.505606  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 19:03:24.579784  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 19:03:24.734977  296772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 19:03:24.742510  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 19:03:25.005423  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 19:03:25.080071  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 19:03:25.235487  296772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 19:03:25.238297  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 19:03:25.506705  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 19:03:25.578959  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 19:03:25.739787  296772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 19:03:25.742261  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 19:03:26.005544  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 19:03:26.080158  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 19:03:26.235371  296772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 19:03:26.238256  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 19:03:26.506806  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 19:03:26.580762  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 19:03:26.733920  296772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 19:03:26.739206  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 19:03:27.006339  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 19:03:27.079592  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 19:03:27.234644  296772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 19:03:27.236885  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 19:03:27.506147  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 19:03:27.578947  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 19:03:27.734675  296772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 19:03:27.743827  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 19:03:28.006460  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 19:03:28.080404  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 19:03:28.234392  296772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 19:03:28.236253  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 19:03:28.505284  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 19:03:28.579708  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 19:03:28.739298  296772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 19:03:28.746115  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 19:03:29.005650  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 19:03:29.079417  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 19:03:29.234472  296772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 19:03:29.237447  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 19:03:29.505673  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 19:03:29.578862  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 19:03:29.733935  296772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 19:03:29.736268  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 19:03:30.008336  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 19:03:30.083171  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 19:03:30.234830  296772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 19:03:30.238228  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 19:03:30.505761  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 19:03:30.578925  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 19:03:30.739308  296772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 19:03:30.744894  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 19:03:31.005614  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 19:03:31.106489  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 19:03:31.236057  296772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 19:03:31.237501  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 19:03:31.506350  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 19:03:31.606267  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 19:03:31.738312  296772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 19:03:31.738575  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 19:03:32.004817  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 19:03:32.079673  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 19:03:32.234737  296772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 19:03:32.238044  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 19:03:32.505902  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 19:03:32.579195  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 19:03:32.752144  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 19:03:32.753176  296772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 19:03:33.005515  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 19:03:33.105890  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 19:03:33.234209  296772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 19:03:33.238034  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 19:03:33.505926  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 19:03:33.578990  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 19:03:33.742506  296772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 19:03:33.754177  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 19:03:34.005670  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 19:03:34.080791  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 19:03:34.234403  296772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 19:03:34.237072  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 19:03:34.506450  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 19:03:34.579421  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 19:03:34.739358  296772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 19:03:34.745294  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 19:03:35.005699  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 19:03:35.078990  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 19:03:35.234384  296772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 19:03:35.237245  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 19:03:35.506696  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 19:03:35.579029  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 19:03:35.738340  296772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 19:03:35.740689  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 19:03:36.005277  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 19:03:36.079399  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 19:03:36.239655  296772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 19:03:36.240750  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 19:03:36.508777  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 19:03:36.608284  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 19:03:36.749786  296772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 19:03:36.755504  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 19:03:37.006031  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 19:03:37.079270  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 19:03:37.234913  296772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 19:03:37.237692  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 19:03:37.506051  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 19:03:37.579784  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 19:03:37.737605  296772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 19:03:37.742761  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 19:03:38.005627  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 19:03:38.106250  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 19:03:38.234475  296772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 19:03:38.236811  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 19:03:38.506264  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 19:03:38.579432  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 19:03:38.737886  296772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 19:03:38.738038  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 19:03:39.006094  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 19:03:39.079808  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 19:03:39.235279  296772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 19:03:39.237235  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 19:03:39.506061  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 19:03:39.579349  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 19:03:39.736168  296772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 19:03:39.737912  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 19:03:40.005464  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 19:03:40.080333  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 19:03:40.234461  296772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 19:03:40.236667  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 19:03:40.505547  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 19:03:40.579353  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 19:03:40.735133  296772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 19:03:40.738637  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 19:03:41.006306  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 19:03:41.106525  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 19:03:41.237033  296772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 19:03:41.245806  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 19:03:41.506716  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 19:03:41.606486  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 19:03:41.743537  296772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 19:03:41.744779  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 19:03:42.005453  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 19:03:42.079554  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 19:03:42.235268  296772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 19:03:42.238540  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 19:03:42.505989  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 19:03:42.579025  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 19:03:42.740468  296772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 19:03:42.745565  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 19:03:43.005820  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 19:03:43.079270  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 19:03:43.235072  296772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 19:03:43.237696  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 19:03:43.506633  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 19:03:43.606338  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 19:03:43.789457  296772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 19:03:43.791397  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 19:03:44.005905  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 19:03:44.079784  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 19:03:44.235521  296772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 19:03:44.242091  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 19:03:44.506355  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 19:03:44.580164  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 19:03:44.741996  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 19:03:44.742396  296772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 19:03:45.012391  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 19:03:45.108851  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 19:03:45.238680  296772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 19:03:45.239038  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 19:03:45.507358  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 19:03:45.579677  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 19:03:45.735928  296772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 19:03:45.758188  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 19:03:46.019599  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 19:03:46.080125  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 19:03:46.237321  296772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 19:03:46.239837  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 19:03:46.505708  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 19:03:46.578943  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 19:03:46.746444  296772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 19:03:46.747889  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 19:03:47.006628  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 19:03:47.078702  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 19:03:47.235828  296772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 19:03:47.237615  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 19:03:47.505764  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 19:03:47.579081  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 19:03:47.741735  296772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 19:03:47.743491  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 19:03:48.006411  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 19:03:48.079486  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 19:03:48.235311  296772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 19:03:48.239083  296772 kapi.go:107] duration metric: took 1m25.005255973s to wait for kubernetes.io/minikube-addons=registry ...
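
The kapi.go lines above are a polling loop: roughly every half second the addon waiter lists pods by label selector, logs the current state, and stops once the pods are no longer Pending, then records the total wait (here about 1m25s for kubernetes.io/minikube-addons=registry). A minimal client-go sketch of that kind of wait; the kube-system namespace, the kubeconfig path taken from the log, the 500ms interval, and the 5-minute cap are assumptions for illustration:

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
		if err != nil {
			panic(err)
		}
		client, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		selector := "kubernetes.io/minikube-addons=registry"
		start := time.Now()
		for {
			if time.Since(start) > 5*time.Minute {
				panic("timed out waiting for " + selector)
			}
			pods, err := client.CoreV1().Pods("kube-system").List(context.TODO(),
				metav1.ListOptions{LabelSelector: selector})
			if err != nil {
				panic(err)
			}
			pending := 0
			for _, p := range pods.Items {
				if p.Status.Phase == corev1.PodPending {
					pending++
				}
			}
			// Done once at least one matching pod exists and none is still Pending.
			if len(pods.Items) > 0 && pending == 0 {
				fmt.Printf("took %s to wait for %s\n", time.Since(start), selector)
				return
			}
			fmt.Printf("waiting for pod %q, current state: Pending\n", selector)
			time.Sleep(500 * time.Millisecond)
		}
	}
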
	I1009 19:03:48.506048  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 19:03:48.579358  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 19:03:48.740175  296772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 19:03:49.006246  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 19:03:49.079099  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 19:03:49.234332  296772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 19:03:49.505479  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 19:03:49.580023  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 19:03:49.738286  296772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 19:03:50.005201  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 19:03:50.079709  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 19:03:50.234277  296772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 19:03:50.505506  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 19:03:50.580343  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 19:03:50.734850  296772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 19:03:51.005036  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 19:03:51.028335  296772 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1009 19:03:51.086442  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 19:03:51.235543  296772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 19:03:51.504875  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 19:03:51.579596  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 19:03:51.735656  296772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 19:03:52.007281  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 19:03:52.080164  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 19:03:52.234058  296772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 19:03:52.280483  296772 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.252110986s)
	W1009 19:03:52.280577  296772 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 19:03:52.280707  296772 out.go:285] ! Enabling 'inspektor-gadget' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1009 19:03:52.510999  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 19:03:52.591940  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 19:03:52.734088  296772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 19:03:53.005849  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 19:03:53.079075  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 19:03:53.235439  296772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 19:03:53.505585  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 19:03:53.579912  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 19:03:53.746385  296772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 19:03:54.007804  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 19:03:54.079003  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 19:03:54.234445  296772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 19:03:54.506236  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 19:03:54.579839  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 19:03:54.739357  296772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 19:03:55.005722  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 19:03:55.106214  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 19:03:55.234362  296772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 19:03:55.507201  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 19:03:55.579552  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 19:03:55.740233  296772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 19:03:56.008124  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 19:03:56.107903  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 19:03:56.237458  296772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 19:03:56.508961  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 19:03:56.579294  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 19:03:56.740532  296772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 19:03:57.007473  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 19:03:57.080747  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 19:03:57.236388  296772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 19:03:57.506626  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 19:03:57.579733  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 19:03:57.805557  296772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 19:03:58.026032  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 19:03:58.079500  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 19:03:58.234538  296772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 19:03:58.505198  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 19:03:58.579028  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 19:03:58.734793  296772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 19:03:59.008149  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 19:03:59.082852  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 19:03:59.233789  296772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 19:03:59.505458  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 19:03:59.579622  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 19:03:59.738953  296772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 19:04:00.012121  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 19:04:00.095005  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 19:04:00.264125  296772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 19:04:00.511225  296772 kapi.go:107] duration metric: took 1m37.009672788s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1009 19:04:00.580046  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 19:04:00.740700  296772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 19:04:01.079032  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 19:04:01.234952  296772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 19:04:01.578840  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 19:04:01.740839  296772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 19:04:02.079361  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 19:04:02.244320  296772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 19:04:02.579634  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 19:04:02.738250  296772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 19:04:03.079595  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 19:04:03.234023  296772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 19:04:03.579323  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 19:04:03.735634  296772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 19:04:04.080766  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 19:04:04.235367  296772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 19:04:04.579468  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 19:04:04.740269  296772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 19:04:05.080517  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 19:04:05.235394  296772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 19:04:05.578838  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 19:04:05.738322  296772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 19:04:06.103209  296772 kapi.go:107] duration metric: took 1m39.527490314s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1009 19:04:06.107256  296772 out.go:179] * Your GCP credentials will now be mounted into every pod created in the addons-999657 cluster.
	I1009 19:04:06.110256  296772 out.go:179] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1009 19:04:06.113388  296772 out.go:179] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I1009 19:04:06.235259  296772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 19:04:06.738130  296772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 19:04:07.234522  296772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 19:04:07.738577  296772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 19:04:08.234591  296772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 19:04:08.739771  296772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 19:04:09.234522  296772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 19:04:09.748860  296772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 19:04:10.235260  296772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 19:04:10.735027  296772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 19:04:11.235284  296772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 19:04:11.745271  296772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 19:04:12.234114  296772 kapi.go:107] duration metric: took 1m49.003349416s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1009 19:04:12.237486  296772 out.go:179] * Enabled addons: registry-creds, storage-provisioner, amd-gpu-device-plugin, ingress-dns, nvidia-device-plugin, cloud-spanner, metrics-server, yakd, default-storageclass, volumesnapshots, registry, csi-hostpath-driver, gcp-auth, ingress
	I1009 19:04:12.240510  296772 addons.go:514] duration metric: took 1m55.367572673s for enable addons: enabled=[registry-creds storage-provisioner amd-gpu-device-plugin ingress-dns nvidia-device-plugin cloud-spanner metrics-server yakd default-storageclass volumesnapshots registry csi-hostpath-driver gcp-auth ingress]
	I1009 19:04:12.240575  296772 start.go:247] waiting for cluster config update ...
	I1009 19:04:12.240599  296772 start.go:256] writing updated cluster config ...
	I1009 19:04:12.240911  296772 ssh_runner.go:195] Run: rm -f paused
	I1009 19:04:12.245184  296772 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1009 19:04:12.334482  296772 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-dm266" in "kube-system" namespace to be "Ready" or be gone ...
	I1009 19:04:12.341009  296772 pod_ready.go:94] pod "coredns-66bc5c9577-dm266" is "Ready"
	I1009 19:04:12.341038  296772 pod_ready.go:86] duration metric: took 6.525573ms for pod "coredns-66bc5c9577-dm266" in "kube-system" namespace to be "Ready" or be gone ...
	I1009 19:04:12.343630  296772 pod_ready.go:83] waiting for pod "etcd-addons-999657" in "kube-system" namespace to be "Ready" or be gone ...
	I1009 19:04:12.348471  296772 pod_ready.go:94] pod "etcd-addons-999657" is "Ready"
	I1009 19:04:12.348502  296772 pod_ready.go:86] duration metric: took 4.798744ms for pod "etcd-addons-999657" in "kube-system" namespace to be "Ready" or be gone ...
	I1009 19:04:12.350946  296772 pod_ready.go:83] waiting for pod "kube-apiserver-addons-999657" in "kube-system" namespace to be "Ready" or be gone ...
	I1009 19:04:12.355462  296772 pod_ready.go:94] pod "kube-apiserver-addons-999657" is "Ready"
	I1009 19:04:12.355526  296772 pod_ready.go:86] duration metric: took 4.555221ms for pod "kube-apiserver-addons-999657" in "kube-system" namespace to be "Ready" or be gone ...
	I1009 19:04:12.357777  296772 pod_ready.go:83] waiting for pod "kube-controller-manager-addons-999657" in "kube-system" namespace to be "Ready" or be gone ...
	I1009 19:04:12.649136  296772 pod_ready.go:94] pod "kube-controller-manager-addons-999657" is "Ready"
	I1009 19:04:12.649213  296772 pod_ready.go:86] duration metric: took 291.409172ms for pod "kube-controller-manager-addons-999657" in "kube-system" namespace to be "Ready" or be gone ...
	I1009 19:04:12.849681  296772 pod_ready.go:83] waiting for pod "kube-proxy-jcwfl" in "kube-system" namespace to be "Ready" or be gone ...
	I1009 19:04:13.248885  296772 pod_ready.go:94] pod "kube-proxy-jcwfl" is "Ready"
	I1009 19:04:13.248912  296772 pod_ready.go:86] duration metric: took 399.20345ms for pod "kube-proxy-jcwfl" in "kube-system" namespace to be "Ready" or be gone ...
	I1009 19:04:13.449222  296772 pod_ready.go:83] waiting for pod "kube-scheduler-addons-999657" in "kube-system" namespace to be "Ready" or be gone ...
	I1009 19:04:13.849961  296772 pod_ready.go:94] pod "kube-scheduler-addons-999657" is "Ready"
	I1009 19:04:13.849993  296772 pod_ready.go:86] duration metric: took 400.741013ms for pod "kube-scheduler-addons-999657" in "kube-system" namespace to be "Ready" or be gone ...
	I1009 19:04:13.850007  296772 pod_ready.go:40] duration metric: took 1.604793616s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1009 19:04:13.910797  296772 start.go:628] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1009 19:04:13.913990  296772 out.go:179] * Done! kubectl is now configured to use "addons-999657" cluster and "default" namespace by default
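	
	The inspektor-gadget failure earlier in this log is kubectl rejecting /etc/kubernetes/addons/ig-crd.yaml because a document in that file is missing the required apiVersion and kind fields, while the gcp-auth messages point at the gcp-auth-skip-secret label for opting a pod out of credential mounting. A minimal illustrative pod spec, not the addon's actual manifest (the pod name and image are placeholders), showing both the required fields and that label:
	
	# Hypothetical manifest: every Kubernetes YAML document must declare
	# apiVersion and kind (the fields the ig-crd.yaml validation error reports as missing).
	apiVersion: v1
	kind: Pod
	metadata:
	  name: example-no-gcp-creds          # placeholder name
	  labels:
	    gcp-auth-skip-secret: "true"      # opt this pod out of GCP credential mounting, per the gcp-auth note above
	spec:
	  containers:
	  - name: app
	    image: registry.k8s.io/pause:3.9  # placeholder image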
	
	
	==> CRI-O <==
	Oct 09 19:07:10 addons-999657 crio[832]: time="2025-10-09T19:07:10.589413265Z" level=info msg="Got pod network &{Name:hello-world-app-5d498dc89-cg7hg Namespace:default ID:045b437684196514c9c348337a0effc28bd806c2fe1750c34d6eee1e6b3ed0b0 UID:8f230458-bcd1-46e2-b78d-d2d28fc5ca4d NetNS:/var/run/netns/5ace65b9-c475-4a72-88cb-44a18d7f38f1 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x400012cb10}] Aliases:map[]}"
	Oct 09 19:07:10 addons-999657 crio[832]: time="2025-10-09T19:07:10.589617204Z" level=info msg="Adding pod default_hello-world-app-5d498dc89-cg7hg to CNI network \"kindnet\" (type=ptp)"
	Oct 09 19:07:10 addons-999657 crio[832]: time="2025-10-09T19:07:10.613661957Z" level=info msg="Got pod network &{Name:hello-world-app-5d498dc89-cg7hg Namespace:default ID:045b437684196514c9c348337a0effc28bd806c2fe1750c34d6eee1e6b3ed0b0 UID:8f230458-bcd1-46e2-b78d-d2d28fc5ca4d NetNS:/var/run/netns/5ace65b9-c475-4a72-88cb-44a18d7f38f1 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x400012cb10}] Aliases:map[]}"
	Oct 09 19:07:10 addons-999657 crio[832]: time="2025-10-09T19:07:10.614074382Z" level=info msg="Checking pod default_hello-world-app-5d498dc89-cg7hg for CNI network kindnet (type=ptp)"
	Oct 09 19:07:10 addons-999657 crio[832]: time="2025-10-09T19:07:10.62266854Z" level=info msg="Ran pod sandbox 045b437684196514c9c348337a0effc28bd806c2fe1750c34d6eee1e6b3ed0b0 with infra container: default/hello-world-app-5d498dc89-cg7hg/POD" id=814bed21-ef6b-49a2-8e5b-44131f9412ab name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 09 19:07:10 addons-999657 crio[832]: time="2025-10-09T19:07:10.627074977Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:1.0" id=c3a80d17-19b1-45db-8a61-63854203eb90 name=/runtime.v1.ImageService/ImageStatus
	Oct 09 19:07:10 addons-999657 crio[832]: time="2025-10-09T19:07:10.627320269Z" level=info msg="Image docker.io/kicbase/echo-server:1.0 not found" id=c3a80d17-19b1-45db-8a61-63854203eb90 name=/runtime.v1.ImageService/ImageStatus
	Oct 09 19:07:10 addons-999657 crio[832]: time="2025-10-09T19:07:10.627421624Z" level=info msg="Neither image nor artfiact docker.io/kicbase/echo-server:1.0 found" id=c3a80d17-19b1-45db-8a61-63854203eb90 name=/runtime.v1.ImageService/ImageStatus
	Oct 09 19:07:10 addons-999657 crio[832]: time="2025-10-09T19:07:10.631365201Z" level=info msg="Pulling image: docker.io/kicbase/echo-server:1.0" id=07ce7293-5880-44ba-a35c-1843282a3479 name=/runtime.v1.ImageService/PullImage
	Oct 09 19:07:10 addons-999657 crio[832]: time="2025-10-09T19:07:10.635183228Z" level=info msg="Trying to access \"docker.io/kicbase/echo-server:1.0\""
	Oct 09 19:07:10 addons-999657 crio[832]: time="2025-10-09T19:07:10.825721555Z" level=info msg="Removing container: c377f27bd967b6fc32ec70c69e96af5c17add6c9942e99807bd4da9cf04133e0" id=423fa670-61bc-4045-b3c3-fea51b923fe3 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 09 19:07:10 addons-999657 crio[832]: time="2025-10-09T19:07:10.83548199Z" level=info msg="Error loading conmon cgroup of container c377f27bd967b6fc32ec70c69e96af5c17add6c9942e99807bd4da9cf04133e0: cgroup deleted" id=423fa670-61bc-4045-b3c3-fea51b923fe3 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 09 19:07:10 addons-999657 crio[832]: time="2025-10-09T19:07:10.841298874Z" level=info msg="Removed container c377f27bd967b6fc32ec70c69e96af5c17add6c9942e99807bd4da9cf04133e0: kube-system/registry-creds-764b6fb674-gq9vn/registry-creds" id=423fa670-61bc-4045-b3c3-fea51b923fe3 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 09 19:07:11 addons-999657 crio[832]: time="2025-10-09T19:07:11.303502707Z" level=info msg="Pulled image: docker.io/kicbase/echo-server@sha256:42a89d9b22e5307cb88494990d5d929c401339f508c0a7e98a4d8ac52623fc5b" id=07ce7293-5880-44ba-a35c-1843282a3479 name=/runtime.v1.ImageService/PullImage
	Oct 09 19:07:11 addons-999657 crio[832]: time="2025-10-09T19:07:11.304241299Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:1.0" id=43899456-6be6-480a-abec-73d5111f2e53 name=/runtime.v1.ImageService/ImageStatus
	Oct 09 19:07:11 addons-999657 crio[832]: time="2025-10-09T19:07:11.307024326Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:1.0" id=f116059c-cf53-4b2a-96a8-21f7cf100837 name=/runtime.v1.ImageService/ImageStatus
	Oct 09 19:07:11 addons-999657 crio[832]: time="2025-10-09T19:07:11.320656777Z" level=info msg="Creating container: default/hello-world-app-5d498dc89-cg7hg/hello-world-app" id=268df51f-cd2b-4863-989f-6880d0c46eb1 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:07:11 addons-999657 crio[832]: time="2025-10-09T19:07:11.322265419Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 09 19:07:11 addons-999657 crio[832]: time="2025-10-09T19:07:11.329965438Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 09 19:07:11 addons-999657 crio[832]: time="2025-10-09T19:07:11.330335059Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/0d62b4c36e1ddf6b4aa8fdf22c730422c42f35bf7cc8d9f5243e8a7b7d4a498d/merged/etc/passwd: no such file or directory"
	Oct 09 19:07:11 addons-999657 crio[832]: time="2025-10-09T19:07:11.330434715Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/0d62b4c36e1ddf6b4aa8fdf22c730422c42f35bf7cc8d9f5243e8a7b7d4a498d/merged/etc/group: no such file or directory"
	Oct 09 19:07:11 addons-999657 crio[832]: time="2025-10-09T19:07:11.330769062Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 09 19:07:11 addons-999657 crio[832]: time="2025-10-09T19:07:11.364242425Z" level=info msg="Created container 14200746c48182c5f10eb93ddebbe36cb423ed2e519ced4570ba55c9a3340dce: default/hello-world-app-5d498dc89-cg7hg/hello-world-app" id=268df51f-cd2b-4863-989f-6880d0c46eb1 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:07:11 addons-999657 crio[832]: time="2025-10-09T19:07:11.368199458Z" level=info msg="Starting container: 14200746c48182c5f10eb93ddebbe36cb423ed2e519ced4570ba55c9a3340dce" id=f5df365a-de0e-409e-9ff7-d90d434a4489 name=/runtime.v1.RuntimeService/StartContainer
	Oct 09 19:07:11 addons-999657 crio[832]: time="2025-10-09T19:07:11.393295976Z" level=info msg="Started container" PID=7198 containerID=14200746c48182c5f10eb93ddebbe36cb423ed2e519ced4570ba55c9a3340dce description=default/hello-world-app-5d498dc89-cg7hg/hello-world-app id=f5df365a-de0e-409e-9ff7-d90d434a4489 name=/runtime.v1.RuntimeService/StartContainer sandboxID=045b437684196514c9c348337a0effc28bd806c2fe1750c34d6eee1e6b3ed0b0
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                                        CREATED             STATE               NAME                                     ATTEMPT             POD ID              POD                                        NAMESPACE
	14200746c4818       docker.io/kicbase/echo-server@sha256:42a89d9b22e5307cb88494990d5d929c401339f508c0a7e98a4d8ac52623fc5b                                        1 second ago        Running             hello-world-app                          0                   045b437684196       hello-world-app-5d498dc89-cg7hg            default
	8a50683b993a5       a2fd0654e5baeec8de2209bfade13a0034e942e708fd2bbfce69bb26a3c02e14                                                                             2 seconds ago       Exited              registry-creds                           1                   94b792137066d       registry-creds-764b6fb674-gq9vn            kube-system
	f9e3b9ee842a0       docker.io/library/nginx@sha256:5d9c9f5c85a351079cc9d2fae74be812ef134f21470926eb2afe8f33ff5859c0                                              2 minutes ago       Running             nginx                                    0                   18631e8346ef3       nginx                                      default
	a993900b2baee       gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e                                          2 minutes ago       Running             busybox                                  0                   bc13dece92467       busybox                                    default
	a1c43a64d2cf0       registry.k8s.io/ingress-nginx/controller@sha256:f99290cbebde470590890356f061fd429ff3def99cc2dedb1fcd21626c5d73d6                             3 minutes ago       Running             controller                               0                   da6bc68cdeb44       ingress-nginx-controller-9cc49f96f-24gzc   ingress-nginx
	917ad92fcb15d       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:2de98fa4b397f92e5e8e05d73caf21787a1c72c41378f3eb7bad72b1e0f4e9ff                                 3 minutes ago       Running             gcp-auth                                 0                   70e3bee7204a5       gcp-auth-78565c9fb4-2hmqj                  gcp-auth
	50e1747ecacea       registry.k8s.io/sig-storage/csi-snapshotter@sha256:bd6b8417b2a83e66ab1d4c1193bb2774f027745bdebbd9e0c1a6518afdecc39a                          3 minutes ago       Running             csi-snapshotter                          0                   f85504d407cce       csi-hostpathplugin-4b7rw                   kube-system
	3f0053d1e02ad       registry.k8s.io/sig-storage/csi-provisioner@sha256:98ffd09c0784203d200e0f8c241501de31c8df79644caac7eed61bd6391e5d49                          3 minutes ago       Running             csi-provisioner                          0                   f85504d407cce       csi-hostpathplugin-4b7rw                   kube-system
	763d4ca0038d6       c67c707f59d87e1add5896e856d3ed36fbff2a778620f70d33b799e0541a77e3                                                                             3 minutes ago       Exited              patch                                    3                   09095f1cd8dc7       ingress-nginx-admission-patch-s9hrl        ingress-nginx
	4e9a584f93742       registry.k8s.io/sig-storage/livenessprobe@sha256:8b00c6e8f52639ed9c6f866085893ab688e57879741b3089e3cfa9998502e158                            3 minutes ago       Running             liveness-probe                           0                   f85504d407cce       csi-hostpathplugin-4b7rw                   kube-system
	f2087bf38944f       registry.k8s.io/sig-storage/hostpathplugin@sha256:7b1dfc90a367222067fc468442fdf952e20fc5961f25c1ad654300ddc34d7083                           3 minutes ago       Running             hostpath                                 0                   f85504d407cce       csi-hostpathplugin-4b7rw                   kube-system
	93ca74439d1e3       registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:511b8c8ac828194a753909d26555ff08bc12f497dd8daeb83fe9d593693a26c1                3 minutes ago       Running             node-driver-registrar                    0                   f85504d407cce       csi-hostpathplugin-4b7rw                   kube-system
	b544dfbf81fb5       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:f279436ecca5b88c20fd93c0d2a668ace136058ecad987e96e26014585e335b4                            3 minutes ago       Running             gadget                                   0                   5184138858d43       gadget-fh5x6                               gadget
	4011ef25cebcc       gcr.io/k8s-minikube/kube-registry-proxy@sha256:26c84a64530a67aa4d749dd4356d67ea27a2576e4d25b640d21857b0574cfd4b                              3 minutes ago       Running             registry-proxy                           0                   ac1610714c97d       registry-proxy-q9p6k                       kube-system
	859a72eb5676e       registry.k8s.io/sig-storage/snapshot-controller@sha256:5d668e35c15df6e87e2530da25d557f543182cedbdb39d421b87076463ee9857                      3 minutes ago       Running             volume-snapshot-controller               0                   63d6fe9d487fc       snapshot-controller-7d9fbc56b8-txqvb       kube-system
	bb893c39a97db       registry.k8s.io/metrics-server/metrics-server@sha256:8f49cf1b0688bb0eae18437882dbf6de2c7a2baac71b1492bc4eca25439a1bf2                        3 minutes ago       Running             metrics-server                           0                   439e408f5be0c       metrics-server-85b7d694d7-qgbgn            kube-system
	a9b5e7a178bf7       registry.k8s.io/sig-storage/snapshot-controller@sha256:5d668e35c15df6e87e2530da25d557f543182cedbdb39d421b87076463ee9857                      3 minutes ago       Running             volume-snapshot-controller               0                   cb80d74ddca30       snapshot-controller-7d9fbc56b8-jp7nw       kube-system
	60de2ccc28f1d       docker.io/rancher/local-path-provisioner@sha256:689a2489a24e74426e4a4666e611c988202c5fa995908b0c60133aca3eb87d98                             3 minutes ago       Running             local-path-provisioner                   0                   b698a08011d68       local-path-provisioner-648f6765c9-mnq45    local-path-storage
	cdcd01c9f8f42       registry.k8s.io/sig-storage/csi-attacher@sha256:4b5609c78455de45821910065281a368d5f760b41250f90cbde5110543bdc326                             3 minutes ago       Running             csi-attacher                             0                   f37d469fb0976       csi-hostpath-attacher-0                    kube-system
	39a52fb8859c2       registry.k8s.io/sig-storage/csi-resizer@sha256:82c1945463342884c05a5b2bc31319712ce75b154c279c2a10765f61e0f688af                              3 minutes ago       Running             csi-resizer                              0                   d309b7867c2b5       csi-hostpath-resizer-0                     kube-system
	c0031a36724df       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:73b47a951627d604fcf1cf93ddc15004fe3854f881da22f690854d098255f1c1                   3 minutes ago       Exited              create                                   0                   27f9afb10c2bb       ingress-nginx-admission-create-22c9r       ingress-nginx
	7b7dc9732ce4b       docker.io/library/registry@sha256:f26c394e5b7c3a707c7373c3e9388e44f0d5bdd3def19652c6bd2ac1a0fa6758                                           3 minutes ago       Running             registry                                 0                   5e085abbe31c7       registry-66898fdd98-d8jgl                  kube-system
	ec4db71d717dd       registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:8b9df00898ded1bfb4d8f3672679f29cd9f88e651b76fef64121c8d347dd12c0   3 minutes ago       Running             csi-external-health-monitor-controller   0                   f85504d407cce       csi-hostpathplugin-4b7rw                   kube-system
	f7e6d7b389c66       nvcr.io/nvidia/k8s-device-plugin@sha256:206d989142113ab71eaf27958a0e0a203f40103cf5b48890f5de80fd1b3fcfde                                     3 minutes ago       Running             nvidia-device-plugin-ctr                 0                   48ccb92735f8f       nvidia-device-plugin-daemonset-4lmwx       kube-system
	f415e4df4a3cd       docker.io/marcnuri/yakd@sha256:1c961556224d57fc747de0b1874524208e5fb4f8386f23e9c1c4c18e97109f17                                              3 minutes ago       Running             yakd                                     0                   421bca8afe63c       yakd-dashboard-5ff678cb9-vn427             yakd-dashboard
	9313a51d10845       gcr.io/cloud-spanner-emulator/emulator@sha256:c2688dc4b7ecb4546084321d63c2b3b616a54263488137e18fcb7c7005aef086                               3 minutes ago       Running             cloud-spanner-emulator                   0                   420265a83e041       cloud-spanner-emulator-86bd5cbb97-qbxnd    default
	fbc396505d84e       docker.io/kicbase/minikube-ingress-dns@sha256:6d710af680d8a9b5a5b1f9047eb83ee4c9258efd3fcd962f938c00bcbb4c5958                               4 minutes ago       Running             minikube-ingress-dns                     0                   75a9985ea2fc1       kube-ingress-dns-minikube                  kube-system
	c8fc026ca1019       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                                                             4 minutes ago       Running             storage-provisioner                      0                   ffedf1ca53d34       storage-provisioner                        kube-system
	2823efa103e5e       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                                                             4 minutes ago       Running             coredns                                  0                   e2005c57356ba       coredns-66bc5c9577-dm266                   kube-system
	532259f4c5926       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                                                             4 minutes ago       Running             kindnet-cni                              0                   93085f5c2d9d7       kindnet-rztm2                              kube-system
	d859645864356       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                                                             4 minutes ago       Running             kube-proxy                               0                   723a94b89d157       kube-proxy-jcwfl                           kube-system
	7fcbf1be4bdef       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                                                             5 minutes ago       Running             kube-scheduler                           0                   e2ee93c8b0fa8       kube-scheduler-addons-999657               kube-system
	09a19318421ae       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                                                             5 minutes ago       Running             kube-controller-manager                  0                   ee77e8fc8c408       kube-controller-manager-addons-999657      kube-system
	aaa0ded06ea4b       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                                                             5 minutes ago       Running             etcd                                     0                   70b5692c7a7d3       etcd-addons-999657                         kube-system
	804d5a04697a7       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                                                             5 minutes ago       Running             kube-apiserver                           0                   3db82d012a194       kube-apiserver-addons-999657               kube-system
	
	
	==> coredns [2823efa103e5ee38b792c909eaeee0c995e8a8302f5b0f522f6d786b3be0e7ba] <==
	[INFO] 10.244.0.18:57834 - 43245 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 94 false 1232" NXDOMAIN qr,rd,ra 83 0.002002156s
	[INFO] 10.244.0.18:57834 - 53094 "AAAA IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 149 0.000212977s
	[INFO] 10.244.0.18:57834 - 51411 "A IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 110 0.00034929s
	[INFO] 10.244.0.18:58324 - 10750 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.00014711s
	[INFO] 10.244.0.18:58324 - 10280 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000102821s
	[INFO] 10.244.0.18:45191 - 43042 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000091515s
	[INFO] 10.244.0.18:45191 - 42853 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000074786s
	[INFO] 10.244.0.18:42364 - 35227 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000085246s
	[INFO] 10.244.0.18:42364 - 34997 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000154626s
	[INFO] 10.244.0.18:35884 - 24363 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.001292377s
	[INFO] 10.244.0.18:35884 - 24183 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.001346118s
	[INFO] 10.244.0.18:44992 - 39097 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000127517s
	[INFO] 10.244.0.18:44992 - 39310 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000151499s
	[INFO] 10.244.0.20:36500 - 25965 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000191407s
	[INFO] 10.244.0.20:57757 - 41543 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000259457s
	[INFO] 10.244.0.20:55331 - 13630 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.00016302s
	[INFO] 10.244.0.20:57339 - 9710 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000095068s
	[INFO] 10.244.0.20:34063 - 59055 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000083688s
	[INFO] 10.244.0.20:33957 - 46577 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000103429s
	[INFO] 10.244.0.20:35186 - 25859 "AAAA IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.002260936s
	[INFO] 10.244.0.20:38636 - 36177 "A IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.001712181s
	[INFO] 10.244.0.20:44409 - 26902 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 610 0.002625323s
	[INFO] 10.244.0.20:58202 - 9238 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.001881387s
	[INFO] 10.244.0.23:45132 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000173144s
	[INFO] 10.244.0.23:39839 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000153535s
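	
	The NXDOMAIN entries above are the resolver walking the cluster search domains (pod namespace, svc.cluster.local, cluster.local, then the EC2 host domain) before the fully qualified name answers NOERROR; this is expected ClusterFirst behaviour with the default ndots:5. A hedged sketch of how a workload could trim that expansion with a pod-level dnsConfig (pod name, image, and the ndots value are illustrative choices, not anything this test configures):
	
	# Illustrative only: lowering ndots means names containing a dot skip the
	# search-domain expansion visible in the CoreDNS log above.
	apiVersion: v1
	kind: Pod
	metadata:
	  name: dns-tuned-example             # placeholder
	spec:
	  containers:
	  - name: app
	    image: registry.k8s.io/pause:3.9  # placeholder
	  dnsConfig:
	    options:
	    - name: ndots
	      value: "1"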
	
	
	==> describe nodes <==
	Name:               addons-999657
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-999657
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=67931d2cc0a2e29153be17ee4f2d502b8a45c9cb
	                    minikube.k8s.io/name=addons-999657
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_09T19_02_13_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-999657
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-999657"}
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 09 Oct 2025 19:02:09 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-999657
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 09 Oct 2025 19:07:08 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 09 Oct 2025 19:07:08 +0000   Thu, 09 Oct 2025 19:02:05 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 09 Oct 2025 19:07:08 +0000   Thu, 09 Oct 2025 19:02:05 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 09 Oct 2025 19:07:08 +0000   Thu, 09 Oct 2025 19:02:05 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 09 Oct 2025 19:07:08 +0000   Thu, 09 Oct 2025 19:02:57 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-999657
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022292Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022292Ki
	  pods:               110
	System Info:
	  Machine ID:                 36212d5d1b0b470f9e6023029f3833c7
	  System UUID:                0cc2ca92-1fed-42ee-b02f-8480b3bcd288
	  Boot ID:                    7eb9c7d9-8be8-415a-b196-052b632584a4
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (28 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m58s
	  default                     cloud-spanner-emulator-86bd5cbb97-qbxnd     0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m52s
	  default                     hello-world-app-5d498dc89-cg7hg             0 (0%)        0 (0%)      0 (0%)           0 (0%)         2s
	  default                     nginx                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m20s
	  gadget                      gadget-fh5x6                                0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m50s
	  gcp-auth                    gcp-auth-78565c9fb4-2hmqj                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m46s
	  ingress-nginx               ingress-nginx-controller-9cc49f96f-24gzc    100m (5%)     0 (0%)      90Mi (1%)        0 (0%)         4m49s
	  kube-system                 coredns-66bc5c9577-dm266                    100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     4m55s
	  kube-system                 csi-hostpath-attacher-0                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m49s
	  kube-system                 csi-hostpath-resizer-0                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m49s
	  kube-system                 csi-hostpathplugin-4b7rw                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m14s
	  kube-system                 etcd-addons-999657                          100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         5m
	  kube-system                 kindnet-rztm2                               100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      4m56s
	  kube-system                 kube-apiserver-addons-999657                250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m2s
	  kube-system                 kube-controller-manager-addons-999657       200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m
	  kube-system                 kube-ingress-dns-minikube                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m51s
	  kube-system                 kube-proxy-jcwfl                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m56s
	  kube-system                 kube-scheduler-addons-999657                100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m1s
	  kube-system                 metrics-server-85b7d694d7-qgbgn             100m (5%)     0 (0%)      200Mi (2%)       0 (0%)         4m51s
	  kube-system                 nvidia-device-plugin-daemonset-4lmwx        0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m14s
	  kube-system                 registry-66898fdd98-d8jgl                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m51s
	  kube-system                 registry-creds-764b6fb674-gq9vn             0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m53s
	  kube-system                 registry-proxy-q9p6k                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m14s
	  kube-system                 snapshot-controller-7d9fbc56b8-jp7nw        0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m50s
	  kube-system                 snapshot-controller-7d9fbc56b8-txqvb        0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m50s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m51s
	  local-path-storage          local-path-provisioner-648f6765c9-mnq45     0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m50s
	  yakd-dashboard              yakd-dashboard-5ff678cb9-vn427              0 (0%)        0 (0%)      128Mi (1%)       256Mi (3%)     4m50s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1050m (52%)  100m (5%)
	  memory             638Mi (8%)   476Mi (6%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	  hugepages-32Mi     0 (0%)       0 (0%)
	  hugepages-64Ki     0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age    From             Message
	  ----     ------                   ----   ----             -------
	  Normal   Starting                 4m54s  kube-proxy       
	  Normal   Starting                 5m     kubelet          Starting kubelet.
	  Warning  CgroupV1                 5m     kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  5m     kubelet          Node addons-999657 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    5m     kubelet          Node addons-999657 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     5m     kubelet          Node addons-999657 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           4m56s  node-controller  Node addons-999657 event: Registered Node addons-999657 in Controller
	  Normal   NodeReady                4m15s  kubelet          Node addons-999657 status is now: NodeReady
	
	
	==> dmesg <==
	[Oct 9 17:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.015195] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.531968] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.036847] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.757016] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +6.932356] kauditd_printk_skb: 36 callbacks suppressed
	[Oct 9 18:02] hrtimer: interrupt took 20603549 ns
	[Oct 9 18:59] kauditd_printk_skb: 8 callbacks suppressed
	[Oct 9 19:02] overlayfs: idmapped layers are currently not supported
	[  +0.066862] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	
	
	==> etcd [aaa0ded06ea4b321c4f9a079d4cf69d526ba351445ed008be4734d67b7ea8524] <==
	{"level":"warn","ts":"2025-10-09T19:02:07.547782Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58214","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T19:02:07.569989Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58238","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T19:02:07.596665Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58254","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T19:02:07.625361Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58280","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T19:02:07.653260Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58288","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T19:02:07.679176Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58314","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T19:02:07.699581Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58332","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T19:02:07.729725Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58350","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T19:02:07.754157Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58368","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T19:02:07.787289Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58396","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T19:02:07.808114Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58420","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T19:02:07.838184Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58446","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T19:02:07.902912Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58456","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T19:02:07.922704Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58474","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T19:02:07.960010Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58496","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T19:02:07.992842Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58514","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T19:02:08.022206Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58526","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T19:02:08.056940Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58556","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T19:02:08.182941Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58568","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T19:02:24.009382Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48210","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T19:02:24.029176Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48222","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T19:02:46.025604Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46108","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T19:02:46.034831Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46130","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T19:02:46.073204Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46140","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T19:02:46.084419Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46168","server-name":"","error":"EOF"}
	
	
	==> gcp-auth [917ad92fcb15d86bdc24453a35ca0c69c4ecc2ec023a6d6a2473c909f6ec3660] <==
	2025/10/09 19:04:05 GCP Auth Webhook started!
	2025/10/09 19:04:14 Ready to marshal response ...
	2025/10/09 19:04:14 Ready to write response ...
	2025/10/09 19:04:14 Ready to marshal response ...
	2025/10/09 19:04:14 Ready to write response ...
	2025/10/09 19:04:14 Ready to marshal response ...
	2025/10/09 19:04:14 Ready to write response ...
	2025/10/09 19:04:35 Ready to marshal response ...
	2025/10/09 19:04:35 Ready to write response ...
	2025/10/09 19:04:39 Ready to marshal response ...
	2025/10/09 19:04:39 Ready to write response ...
	2025/10/09 19:04:39 Ready to marshal response ...
	2025/10/09 19:04:39 Ready to write response ...
	2025/10/09 19:04:47 Ready to marshal response ...
	2025/10/09 19:04:47 Ready to write response ...
	2025/10/09 19:04:52 Ready to marshal response ...
	2025/10/09 19:04:52 Ready to write response ...
	2025/10/09 19:05:05 Ready to marshal response ...
	2025/10/09 19:05:05 Ready to write response ...
	2025/10/09 19:05:34 Ready to marshal response ...
	2025/10/09 19:05:34 Ready to write response ...
	2025/10/09 19:07:10 Ready to marshal response ...
	2025/10/09 19:07:10 Ready to write response ...
	
	
	==> kernel <==
	 19:07:12 up  1:49,  0 user,  load average: 1.22, 1.95, 2.81
	Linux addons-999657 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [532259f4c5926820e3e18f689f80a1bc102631a6a0a05374223820ef91ec414f] <==
	I1009 19:05:07.922494       1 main.go:301] handling current node
	I1009 19:05:17.922294       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1009 19:05:17.922323       1 main.go:301] handling current node
	I1009 19:05:27.922850       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1009 19:05:27.922915       1 main.go:301] handling current node
	I1009 19:05:37.922636       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1009 19:05:37.922675       1 main.go:301] handling current node
	I1009 19:05:47.922108       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1009 19:05:47.922146       1 main.go:301] handling current node
	I1009 19:05:57.922837       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1009 19:05:57.922871       1 main.go:301] handling current node
	I1009 19:06:07.922872       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1009 19:06:07.922908       1 main.go:301] handling current node
	I1009 19:06:17.922287       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1009 19:06:17.922395       1 main.go:301] handling current node
	I1009 19:06:27.922704       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1009 19:06:27.922767       1 main.go:301] handling current node
	I1009 19:06:37.922845       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1009 19:06:37.922877       1 main.go:301] handling current node
	I1009 19:06:47.922087       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1009 19:06:47.922198       1 main.go:301] handling current node
	I1009 19:06:57.922402       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1009 19:06:57.922527       1 main.go:301] handling current node
	I1009 19:07:07.922540       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1009 19:07:07.922580       1 main.go:301] handling current node
	
	
	==> kube-apiserver [804d5a04697a7b5835636d98ba88b94561d9699443a8eadbbe90fd28d0b160cb] <==
	W1009 19:03:22.815739       1 handler_proxy.go:99] no RequestInfo found in the context
	E1009 19:03:22.815786       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I1009 19:03:22.815800       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1009 19:03:22.816966       1 handler_proxy.go:99] no RequestInfo found in the context
	E1009 19:03:22.817038       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1009 19:03:22.817052       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	E1009 19:03:55.917878       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.108.231.99:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.108.231.99:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.108.231.99:443: connect: connection refused" logger="UnhandledError"
	W1009 19:03:55.917964       1 handler_proxy.go:99] no RequestInfo found in the context
	E1009 19:03:55.918028       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E1009 19:03:55.918724       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.108.231.99:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.108.231.99:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.108.231.99:443: connect: connection refused" logger="UnhandledError"
	E1009 19:03:55.924450       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.108.231.99:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.108.231.99:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.108.231.99:443: connect: connection refused" logger="UnhandledError"
	I1009 19:03:56.023610       1 handler.go:285] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E1009 19:04:23.881853       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:46938: use of closed network connection
	E1009 19:04:24.125235       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:46962: use of closed network connection
	E1009 19:04:24.264253       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:46966: use of closed network connection
	I1009 19:04:51.909678       1 controller.go:667] quota admission added evaluator for: ingresses.networking.k8s.io
	I1009 19:04:52.373918       1 alloc.go:328] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.110.42.213"}
	I1009 19:05:17.378147       1 controller.go:667] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I1009 19:07:10.457662       1 alloc.go:328] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.110.174.218"}
	
	
	==> kube-controller-manager [09a19318421aec51d9c6d371040aa4863198795c9d616ed4df00c26edc18b036] <==
	I1009 19:02:16.049698       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1009 19:02:16.049758       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1009 19:02:16.050030       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1009 19:02:16.050092       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1009 19:02:16.050344       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1009 19:02:16.050683       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1009 19:02:16.052406       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1009 19:02:16.053658       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1009 19:02:16.053672       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1009 19:02:16.053682       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1009 19:02:16.060064       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1009 19:02:16.061501       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	E1009 19:02:21.935296       1 replica_set.go:587] "Unhandled Error" err="sync \"kube-system/metrics-server-85b7d694d7\" failed with pods \"metrics-server-85b7d694d7-\" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount \"metrics-server\" not found" logger="UnhandledError"
	E1009 19:02:46.012038       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1009 19:02:46.012214       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="volumesnapshots.snapshot.storage.k8s.io"
	I1009 19:02:46.012261       1 shared_informer.go:349] "Waiting for caches to sync" controller="resource quota"
	I1009 19:02:46.055660       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	I1009 19:02:46.060526       1 shared_informer.go:349] "Waiting for caches to sync" controller="garbage collector"
	I1009 19:02:46.112881       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1009 19:02:46.161478       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1009 19:03:01.064904       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	E1009 19:03:16.119184       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1009 19:03:16.170842       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E1009 19:03:46.127553       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1009 19:03:46.183088       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	
	
	==> kube-proxy [d85964586435683cdf29db9f8d8e0fd3637c91ff01ef302bb910a1397cf75b01] <==
	I1009 19:02:17.961961       1 server_linux.go:53] "Using iptables proxy"
	I1009 19:02:18.083211       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1009 19:02:18.194696       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1009 19:02:18.194734       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1009 19:02:18.194810       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1009 19:02:18.223149       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1009 19:02:18.223200       1 server_linux.go:132] "Using iptables Proxier"
	I1009 19:02:18.272059       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1009 19:02:18.272385       1 server.go:527] "Version info" version="v1.34.1"
	I1009 19:02:18.272404       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1009 19:02:18.278326       1 config.go:200] "Starting service config controller"
	I1009 19:02:18.278342       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1009 19:02:18.278359       1 config.go:106] "Starting endpoint slice config controller"
	I1009 19:02:18.278363       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1009 19:02:18.278373       1 config.go:403] "Starting serviceCIDR config controller"
	I1009 19:02:18.278377       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1009 19:02:18.279145       1 config.go:309] "Starting node config controller"
	I1009 19:02:18.279153       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1009 19:02:18.279159       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1009 19:02:18.379484       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1009 19:02:18.379533       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1009 19:02:18.379569       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [7fcbf1be4bdef0161b5efca4fb661fd9a8fddc41f80f6974b49d4a8bb8d17634] <==
	I1009 19:02:09.523247       1 serving.go:386] Generated self-signed cert in-memory
	W1009 19:02:10.836308       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1009 19:02:10.836922       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1009 19:02:10.836990       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1009 19:02:10.837022       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1009 19:02:10.857955       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1009 19:02:10.858054       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1009 19:02:10.861877       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1009 19:02:10.862687       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1009 19:02:10.862764       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1009 19:02:10.862820       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1009 19:02:10.963174       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 09 19:05:42 addons-999657 kubelet[1301]: I1009 19:05:42.466199    1301 operation_generator.go:895] UnmountDevice succeeded for volume "pvc-1aa99f45-27b9-4455-be43-5d61c0332a6c" (UniqueName: "kubernetes.io/csi/hostpath.csi.k8s.io^ee61bd3e-a542-11f0-b6b3-ba06b72e2aba") on node "addons-999657"
	Oct 09 19:05:42 addons-999657 kubelet[1301]: I1009 19:05:42.496958    1301 scope.go:117] "RemoveContainer" containerID="0df3cbb6414523f54e79c0b9b1fbc38be4e39b8ab2c4f1f6e939707b215173b2"
	Oct 09 19:05:42 addons-999657 kubelet[1301]: I1009 19:05:42.506868    1301 scope.go:117] "RemoveContainer" containerID="0df3cbb6414523f54e79c0b9b1fbc38be4e39b8ab2c4f1f6e939707b215173b2"
	Oct 09 19:05:42 addons-999657 kubelet[1301]: E1009 19:05:42.507339    1301 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0df3cbb6414523f54e79c0b9b1fbc38be4e39b8ab2c4f1f6e939707b215173b2\": container with ID starting with 0df3cbb6414523f54e79c0b9b1fbc38be4e39b8ab2c4f1f6e939707b215173b2 not found: ID does not exist" containerID="0df3cbb6414523f54e79c0b9b1fbc38be4e39b8ab2c4f1f6e939707b215173b2"
	Oct 09 19:05:42 addons-999657 kubelet[1301]: I1009 19:05:42.507644    1301 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0df3cbb6414523f54e79c0b9b1fbc38be4e39b8ab2c4f1f6e939707b215173b2"} err="failed to get container status \"0df3cbb6414523f54e79c0b9b1fbc38be4e39b8ab2c4f1f6e939707b215173b2\": rpc error: code = NotFound desc = could not find container \"0df3cbb6414523f54e79c0b9b1fbc38be4e39b8ab2c4f1f6e939707b215173b2\": container with ID starting with 0df3cbb6414523f54e79c0b9b1fbc38be4e39b8ab2c4f1f6e939707b215173b2 not found: ID does not exist"
	Oct 09 19:05:42 addons-999657 kubelet[1301]: I1009 19:05:42.565487    1301 reconciler_common.go:299] "Volume detached for volume \"pvc-1aa99f45-27b9-4455-be43-5d61c0332a6c\" (UniqueName: \"kubernetes.io/csi/hostpath.csi.k8s.io^ee61bd3e-a542-11f0-b6b3-ba06b72e2aba\") on node \"addons-999657\" DevicePath \"\""
	Oct 09 19:05:44 addons-999657 kubelet[1301]: I1009 19:05:44.165028    1301 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9e094481-8df7-40a9-b685-47bec7e7ac95" path="/var/lib/kubelet/pods/9e094481-8df7-40a9-b685-47bec7e7ac95/volumes"
	Oct 09 19:05:53 addons-999657 kubelet[1301]: I1009 19:05:53.161722    1301 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-proxy-q9p6k" secret="" err="secret \"gcp-auth\" not found"
	Oct 09 19:06:04 addons-999657 kubelet[1301]: I1009 19:06:04.162091    1301 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-66898fdd98-d8jgl" secret="" err="secret \"gcp-auth\" not found"
	Oct 09 19:06:11 addons-999657 kubelet[1301]: I1009 19:06:11.162447    1301 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/nvidia-device-plugin-daemonset-4lmwx" secret="" err="secret \"gcp-auth\" not found"
	Oct 09 19:06:12 addons-999657 kubelet[1301]: E1009 19:06:12.263447    1301 manager.go:1116] Failed to create existing container: /docker/ecd6cd18f751718cb40377c19ed8fb91d99c6fd2c7932de2df67df8a9fb7b9bd/crio-d857d02925c1002e02d975c401f1225f27a0260e744cab5d5441818afcd0e6b9: Error finding container d857d02925c1002e02d975c401f1225f27a0260e744cab5d5441818afcd0e6b9: Status 404 returned error can't find the container with id d857d02925c1002e02d975c401f1225f27a0260e744cab5d5441818afcd0e6b9
	Oct 09 19:06:12 addons-999657 kubelet[1301]: E1009 19:06:12.265720    1301 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/8615a907be1f7098781fccafb430aec1a356b78564fe616fc87e32ea6aebfc73/diff" to get inode usage: stat /var/lib/containers/storage/overlay/8615a907be1f7098781fccafb430aec1a356b78564fe616fc87e32ea6aebfc73/diff: no such file or directory, extraDiskErr: <nil>
	Oct 09 19:07:08 addons-999657 kubelet[1301]: I1009 19:07:08.161721    1301 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-creds-764b6fb674-gq9vn" secret="" err="secret \"gcp-auth\" not found"
	Oct 09 19:07:09 addons-999657 kubelet[1301]: I1009 19:07:09.811733    1301 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-creds-764b6fb674-gq9vn" secret="" err="secret \"gcp-auth\" not found"
	Oct 09 19:07:09 addons-999657 kubelet[1301]: I1009 19:07:09.811807    1301 scope.go:117] "RemoveContainer" containerID="c377f27bd967b6fc32ec70c69e96af5c17add6c9942e99807bd4da9cf04133e0"
	Oct 09 19:07:10 addons-999657 kubelet[1301]: I1009 19:07:10.400527    1301 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tcxnt\" (UniqueName: \"kubernetes.io/projected/8f230458-bcd1-46e2-b78d-d2d28fc5ca4d-kube-api-access-tcxnt\") pod \"hello-world-app-5d498dc89-cg7hg\" (UID: \"8f230458-bcd1-46e2-b78d-d2d28fc5ca4d\") " pod="default/hello-world-app-5d498dc89-cg7hg"
	Oct 09 19:07:10 addons-999657 kubelet[1301]: I1009 19:07:10.400591    1301 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/8f230458-bcd1-46e2-b78d-d2d28fc5ca4d-gcp-creds\") pod \"hello-world-app-5d498dc89-cg7hg\" (UID: \"8f230458-bcd1-46e2-b78d-d2d28fc5ca4d\") " pod="default/hello-world-app-5d498dc89-cg7hg"
	Oct 09 19:07:10 addons-999657 kubelet[1301]: W1009 19:07:10.625360    1301 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/ecd6cd18f751718cb40377c19ed8fb91d99c6fd2c7932de2df67df8a9fb7b9bd/crio-045b437684196514c9c348337a0effc28bd806c2fe1750c34d6eee1e6b3ed0b0 WatchSource:0}: Error finding container 045b437684196514c9c348337a0effc28bd806c2fe1750c34d6eee1e6b3ed0b0: Status 404 returned error can't find the container with id 045b437684196514c9c348337a0effc28bd806c2fe1750c34d6eee1e6b3ed0b0
	Oct 09 19:07:10 addons-999657 kubelet[1301]: I1009 19:07:10.818488    1301 scope.go:117] "RemoveContainer" containerID="c377f27bd967b6fc32ec70c69e96af5c17add6c9942e99807bd4da9cf04133e0"
	Oct 09 19:07:10 addons-999657 kubelet[1301]: I1009 19:07:10.819501    1301 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-creds-764b6fb674-gq9vn" secret="" err="secret \"gcp-auth\" not found"
	Oct 09 19:07:10 addons-999657 kubelet[1301]: I1009 19:07:10.819561    1301 scope.go:117] "RemoveContainer" containerID="8a50683b993a5a8a7263db17d927ed8f468527be018fc16da71d31a4e1d8fee9"
	Oct 09 19:07:10 addons-999657 kubelet[1301]: E1009 19:07:10.819720    1301 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-creds\" with CrashLoopBackOff: \"back-off 10s restarting failed container=registry-creds pod=registry-creds-764b6fb674-gq9vn_kube-system(bbaa910d-1ec1-4260-9cf0-961ed5abd1c8)\"" pod="kube-system/registry-creds-764b6fb674-gq9vn" podUID="bbaa910d-1ec1-4260-9cf0-961ed5abd1c8"
	Oct 09 19:07:11 addons-999657 kubelet[1301]: I1009 19:07:11.832814    1301 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-creds-764b6fb674-gq9vn" secret="" err="secret \"gcp-auth\" not found"
	Oct 09 19:07:11 addons-999657 kubelet[1301]: I1009 19:07:11.832964    1301 scope.go:117] "RemoveContainer" containerID="8a50683b993a5a8a7263db17d927ed8f468527be018fc16da71d31a4e1d8fee9"
	Oct 09 19:07:11 addons-999657 kubelet[1301]: E1009 19:07:11.836527    1301 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-creds\" with CrashLoopBackOff: \"back-off 10s restarting failed container=registry-creds pod=registry-creds-764b6fb674-gq9vn_kube-system(bbaa910d-1ec1-4260-9cf0-961ed5abd1c8)\"" pod="kube-system/registry-creds-764b6fb674-gq9vn" podUID="bbaa910d-1ec1-4260-9cf0-961ed5abd1c8"
	
	
	==> storage-provisioner [c8fc026ca1019d8a0f4406f6cc4f8f68a03b36d38c792e26f09d7d78bf7ea9e3] <==
	W1009 19:06:48.664500       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1009 19:06:50.668047       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1009 19:06:50.672841       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1009 19:06:52.675625       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1009 19:06:52.680574       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1009 19:06:54.683826       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1009 19:06:54.689262       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1009 19:06:56.693586       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1009 19:06:56.698515       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1009 19:06:58.703576       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1009 19:06:58.717357       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1009 19:07:00.721330       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1009 19:07:00.728387       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1009 19:07:02.731627       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1009 19:07:02.741977       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1009 19:07:04.746419       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1009 19:07:04.753514       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1009 19:07:06.757695       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1009 19:07:06.764812       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1009 19:07:08.767884       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1009 19:07:08.780918       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1009 19:07:10.784846       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1009 19:07:10.792485       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1009 19:07:12.795993       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1009 19:07:12.804330       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-999657 -n addons-999657
helpers_test.go:269: (dbg) Run:  kubectl --context addons-999657 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: ingress-nginx-admission-create-22c9r ingress-nginx-admission-patch-s9hrl
helpers_test.go:282: ======> post-mortem[TestAddons/parallel/Ingress]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context addons-999657 describe pod ingress-nginx-admission-create-22c9r ingress-nginx-admission-patch-s9hrl
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context addons-999657 describe pod ingress-nginx-admission-create-22c9r ingress-nginx-admission-patch-s9hrl: exit status 1 (99.370843ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-22c9r" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-s9hrl" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context addons-999657 describe pod ingress-nginx-admission-create-22c9r ingress-nginx-admission-patch-s9hrl: exit status 1
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-999657 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-999657 addons disable ingress-dns --alsologtostderr -v=1: exit status 11 (276.264601ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1009 19:07:13.868875  306493 out.go:360] Setting OutFile to fd 1 ...
	I1009 19:07:13.869690  306493 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1009 19:07:13.869708  306493 out.go:374] Setting ErrFile to fd 2...
	I1009 19:07:13.869714  306493 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1009 19:07:13.870029  306493 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21683-294150/.minikube/bin
	I1009 19:07:13.870391  306493 mustload.go:65] Loading cluster: addons-999657
	I1009 19:07:13.870808  306493 config.go:182] Loaded profile config "addons-999657": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1009 19:07:13.870828  306493 addons.go:606] checking whether the cluster is paused
	I1009 19:07:13.870963  306493 config.go:182] Loaded profile config "addons-999657": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1009 19:07:13.870980  306493 host.go:66] Checking if "addons-999657" exists ...
	I1009 19:07:13.871461  306493 cli_runner.go:164] Run: docker container inspect addons-999657 --format={{.State.Status}}
	I1009 19:07:13.890694  306493 ssh_runner.go:195] Run: systemctl --version
	I1009 19:07:13.890771  306493 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-999657
	I1009 19:07:13.911477  306493 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/21683-294150/.minikube/machines/addons-999657/id_rsa Username:docker}
	I1009 19:07:14.016171  306493 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1009 19:07:14.016262  306493 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1009 19:07:14.050861  306493 cri.go:89] found id: "8a50683b993a5a8a7263db17d927ed8f468527be018fc16da71d31a4e1d8fee9"
	I1009 19:07:14.050899  306493 cri.go:89] found id: "50e1747ecacea77a5f93e1d46aa99c4cb1fbce08f8f2f546154db1f9be02c796"
	I1009 19:07:14.050904  306493 cri.go:89] found id: "3f0053d1e02ad19ca3e128caaadebe9c4c0975ba7cb365a25e3c6d52870a17f0"
	I1009 19:07:14.050909  306493 cri.go:89] found id: "4e9a584f93742f2382e823963bed9b224f3ccba7c95660eb1dbca3c6b9908b3f"
	I1009 19:07:14.050912  306493 cri.go:89] found id: "f2087bf38944fc739afaac1113f222793a13154e659f060c4cdece3a7fa73071"
	I1009 19:07:14.050916  306493 cri.go:89] found id: "93ca74439d1e37925a704ede2fce384f9f9489c7b96ead6d63884128a4d9b0d1"
	I1009 19:07:14.050919  306493 cri.go:89] found id: "4011ef25cebccd0072dd10b711fedb8be54cb74589db33f0e4a5e667873eed44"
	I1009 19:07:14.050922  306493 cri.go:89] found id: "859a72eb5676e02ccdbfc8116afe2d9c4f2283fc97f6e130eb77ba45fe1f2ddf"
	I1009 19:07:14.050943  306493 cri.go:89] found id: "bb893c39a97db27f01be58b5eec66390173c64aa6dbf5fcc501e526bd34e4f74"
	I1009 19:07:14.050953  306493 cri.go:89] found id: "a9b5e7a178bf7423b4d23385f8409bd2da8f1ec9e312f7a1c786a7b9f1ec78fe"
	I1009 19:07:14.050960  306493 cri.go:89] found id: "cdcd01c9f8f4271bde354d676a0d7b97cf89b90bcce19fbab3de17f21aebb44c"
	I1009 19:07:14.050963  306493 cri.go:89] found id: "39a52fb8859c2040cedab3dbdc0662ae79f7d3abba463258d2c504bf8830448b"
	I1009 19:07:14.050967  306493 cri.go:89] found id: "7b7dc9732ce4b2127334e1e0c5b92a0ae3fbb0d316e98281fb8f8e8269c4b998"
	I1009 19:07:14.050970  306493 cri.go:89] found id: "ec4db71d717ddecd989934884e42ed0846d635333858d9d800181bfa0530c564"
	I1009 19:07:14.050974  306493 cri.go:89] found id: "f7e6d7b389c66f70de1f9dfa7a02589e922c296513ed0a3835867069f4fa9db8"
	I1009 19:07:14.050991  306493 cri.go:89] found id: "fbc396505d84e35ea37081e17222da9738c8ff9edd4bf2e014fdb1bf99f6de56"
	I1009 19:07:14.051000  306493 cri.go:89] found id: "c8fc026ca1019d8a0f4406f6cc4f8f68a03b36d38c792e26f09d7d78bf7ea9e3"
	I1009 19:07:14.051017  306493 cri.go:89] found id: "2823efa103e5ee38b792c909eaeee0c995e8a8302f5b0f522f6d786b3be0e7ba"
	I1009 19:07:14.051023  306493 cri.go:89] found id: "532259f4c5926820e3e18f689f80a1bc102631a6a0a05374223820ef91ec414f"
	I1009 19:07:14.051026  306493 cri.go:89] found id: "d85964586435683cdf29db9f8d8e0fd3637c91ff01ef302bb910a1397cf75b01"
	I1009 19:07:14.051031  306493 cri.go:89] found id: "7fcbf1be4bdef0161b5efca4fb661fd9a8fddc41f80f6974b49d4a8bb8d17634"
	I1009 19:07:14.051035  306493 cri.go:89] found id: "09a19318421aec51d9c6d371040aa4863198795c9d616ed4df00c26edc18b036"
	I1009 19:07:14.051062  306493 cri.go:89] found id: "aaa0ded06ea4b321c4f9a079d4cf69d526ba351445ed008be4734d67b7ea8524"
	I1009 19:07:14.051073  306493 cri.go:89] found id: "804d5a04697a7b5835636d98ba88b94561d9699443a8eadbbe90fd28d0b160cb"
	I1009 19:07:14.051081  306493 cri.go:89] found id: ""
	I1009 19:07:14.051175  306493 ssh_runner.go:195] Run: sudo runc list -f json
	I1009 19:07:14.067130  306493 out.go:203] 
	W1009 19:07:14.070296  306493 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-09T19:07:14Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-09T19:07:14Z" level=error msg="open /run/runc: no such file or directory"
	
	W1009 19:07:14.070324  306493 out.go:285] * 
	* 
	W1009 19:07:14.075338  306493 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_4116e8848b7c0e6a40fa9061a5ca6da2e0eb6ead_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_4116e8848b7c0e6a40fa9061a5ca6da2e0eb6ead_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1009 19:07:14.078236  306493 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable ingress-dns addon: args "out/minikube-linux-arm64 -p addons-999657 addons disable ingress-dns --alsologtostderr -v=1": exit status 11
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-999657 addons disable ingress --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-999657 addons disable ingress --alsologtostderr -v=1: exit status 11 (261.114273ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1009 19:07:14.138663  306538 out.go:360] Setting OutFile to fd 1 ...
	I1009 19:07:14.139407  306538 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1009 19:07:14.139421  306538 out.go:374] Setting ErrFile to fd 2...
	I1009 19:07:14.139427  306538 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1009 19:07:14.139725  306538 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21683-294150/.minikube/bin
	I1009 19:07:14.140067  306538 mustload.go:65] Loading cluster: addons-999657
	I1009 19:07:14.140461  306538 config.go:182] Loaded profile config "addons-999657": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1009 19:07:14.140480  306538 addons.go:606] checking whether the cluster is paused
	I1009 19:07:14.140584  306538 config.go:182] Loaded profile config "addons-999657": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1009 19:07:14.140600  306538 host.go:66] Checking if "addons-999657" exists ...
	I1009 19:07:14.141039  306538 cli_runner.go:164] Run: docker container inspect addons-999657 --format={{.State.Status}}
	I1009 19:07:14.158978  306538 ssh_runner.go:195] Run: systemctl --version
	I1009 19:07:14.159044  306538 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-999657
	I1009 19:07:14.178211  306538 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/21683-294150/.minikube/machines/addons-999657/id_rsa Username:docker}
	I1009 19:07:14.283897  306538 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1009 19:07:14.283991  306538 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1009 19:07:14.313466  306538 cri.go:89] found id: "8a50683b993a5a8a7263db17d927ed8f468527be018fc16da71d31a4e1d8fee9"
	I1009 19:07:14.313490  306538 cri.go:89] found id: "50e1747ecacea77a5f93e1d46aa99c4cb1fbce08f8f2f546154db1f9be02c796"
	I1009 19:07:14.313505  306538 cri.go:89] found id: "3f0053d1e02ad19ca3e128caaadebe9c4c0975ba7cb365a25e3c6d52870a17f0"
	I1009 19:07:14.313510  306538 cri.go:89] found id: "4e9a584f93742f2382e823963bed9b224f3ccba7c95660eb1dbca3c6b9908b3f"
	I1009 19:07:14.313513  306538 cri.go:89] found id: "f2087bf38944fc739afaac1113f222793a13154e659f060c4cdece3a7fa73071"
	I1009 19:07:14.313518  306538 cri.go:89] found id: "93ca74439d1e37925a704ede2fce384f9f9489c7b96ead6d63884128a4d9b0d1"
	I1009 19:07:14.313522  306538 cri.go:89] found id: "4011ef25cebccd0072dd10b711fedb8be54cb74589db33f0e4a5e667873eed44"
	I1009 19:07:14.313525  306538 cri.go:89] found id: "859a72eb5676e02ccdbfc8116afe2d9c4f2283fc97f6e130eb77ba45fe1f2ddf"
	I1009 19:07:14.313529  306538 cri.go:89] found id: "bb893c39a97db27f01be58b5eec66390173c64aa6dbf5fcc501e526bd34e4f74"
	I1009 19:07:14.313535  306538 cri.go:89] found id: "a9b5e7a178bf7423b4d23385f8409bd2da8f1ec9e312f7a1c786a7b9f1ec78fe"
	I1009 19:07:14.313539  306538 cri.go:89] found id: "cdcd01c9f8f4271bde354d676a0d7b97cf89b90bcce19fbab3de17f21aebb44c"
	I1009 19:07:14.313542  306538 cri.go:89] found id: "39a52fb8859c2040cedab3dbdc0662ae79f7d3abba463258d2c504bf8830448b"
	I1009 19:07:14.313545  306538 cri.go:89] found id: "7b7dc9732ce4b2127334e1e0c5b92a0ae3fbb0d316e98281fb8f8e8269c4b998"
	I1009 19:07:14.313548  306538 cri.go:89] found id: "ec4db71d717ddecd989934884e42ed0846d635333858d9d800181bfa0530c564"
	I1009 19:07:14.313552  306538 cri.go:89] found id: "f7e6d7b389c66f70de1f9dfa7a02589e922c296513ed0a3835867069f4fa9db8"
	I1009 19:07:14.313557  306538 cri.go:89] found id: "fbc396505d84e35ea37081e17222da9738c8ff9edd4bf2e014fdb1bf99f6de56"
	I1009 19:07:14.313566  306538 cri.go:89] found id: "c8fc026ca1019d8a0f4406f6cc4f8f68a03b36d38c792e26f09d7d78bf7ea9e3"
	I1009 19:07:14.313570  306538 cri.go:89] found id: "2823efa103e5ee38b792c909eaeee0c995e8a8302f5b0f522f6d786b3be0e7ba"
	I1009 19:07:14.313573  306538 cri.go:89] found id: "532259f4c5926820e3e18f689f80a1bc102631a6a0a05374223820ef91ec414f"
	I1009 19:07:14.313576  306538 cri.go:89] found id: "d85964586435683cdf29db9f8d8e0fd3637c91ff01ef302bb910a1397cf75b01"
	I1009 19:07:14.313580  306538 cri.go:89] found id: "7fcbf1be4bdef0161b5efca4fb661fd9a8fddc41f80f6974b49d4a8bb8d17634"
	I1009 19:07:14.313584  306538 cri.go:89] found id: "09a19318421aec51d9c6d371040aa4863198795c9d616ed4df00c26edc18b036"
	I1009 19:07:14.313587  306538 cri.go:89] found id: "aaa0ded06ea4b321c4f9a079d4cf69d526ba351445ed008be4734d67b7ea8524"
	I1009 19:07:14.313590  306538 cri.go:89] found id: "804d5a04697a7b5835636d98ba88b94561d9699443a8eadbbe90fd28d0b160cb"
	I1009 19:07:14.313593  306538 cri.go:89] found id: ""
	I1009 19:07:14.313651  306538 ssh_runner.go:195] Run: sudo runc list -f json
	I1009 19:07:14.328740  306538 out.go:203] 
	W1009 19:07:14.331719  306538 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-09T19:07:14Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-09T19:07:14Z" level=error msg="open /run/runc: no such file or directory"
	
	W1009 19:07:14.331743  306538 out.go:285] * 
	* 
	W1009 19:07:14.336709  306538 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_62553deefc570c97f2052ef703df7b8905a654d6_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_62553deefc570c97f2052ef703df7b8905a654d6_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1009 19:07:14.339728  306538 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable ingress addon: args "out/minikube-linux-arm64 -p addons-999657 addons disable ingress --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Ingress (142.75s)

                                                
                                    
TestAddons/parallel/InspektorGadget (6.26s)

                                                
                                                
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:352: "gadget-fh5x6" [ddfd0834-9011-4d64-aa0a-5d06cb8ad39c] Running
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.003246804s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-999657 addons disable inspektor-gadget --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-999657 addons disable inspektor-gadget --alsologtostderr -v=1: exit status 11 (254.778889ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1009 19:05:49.513994  305375 out.go:360] Setting OutFile to fd 1 ...
	I1009 19:05:49.515028  305375 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1009 19:05:49.515044  305375 out.go:374] Setting ErrFile to fd 2...
	I1009 19:05:49.515050  305375 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1009 19:05:49.515380  305375 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21683-294150/.minikube/bin
	I1009 19:05:49.515770  305375 mustload.go:65] Loading cluster: addons-999657
	I1009 19:05:49.516196  305375 config.go:182] Loaded profile config "addons-999657": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1009 19:05:49.516217  305375 addons.go:606] checking whether the cluster is paused
	I1009 19:05:49.516366  305375 config.go:182] Loaded profile config "addons-999657": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1009 19:05:49.516385  305375 host.go:66] Checking if "addons-999657" exists ...
	I1009 19:05:49.516887  305375 cli_runner.go:164] Run: docker container inspect addons-999657 --format={{.State.Status}}
	I1009 19:05:49.537836  305375 ssh_runner.go:195] Run: systemctl --version
	I1009 19:05:49.537910  305375 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-999657
	I1009 19:05:49.557422  305375 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/21683-294150/.minikube/machines/addons-999657/id_rsa Username:docker}
	I1009 19:05:49.660615  305375 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1009 19:05:49.660726  305375 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1009 19:05:49.692114  305375 cri.go:89] found id: "50e1747ecacea77a5f93e1d46aa99c4cb1fbce08f8f2f546154db1f9be02c796"
	I1009 19:05:49.692140  305375 cri.go:89] found id: "3f0053d1e02ad19ca3e128caaadebe9c4c0975ba7cb365a25e3c6d52870a17f0"
	I1009 19:05:49.692145  305375 cri.go:89] found id: "4e9a584f93742f2382e823963bed9b224f3ccba7c95660eb1dbca3c6b9908b3f"
	I1009 19:05:49.692150  305375 cri.go:89] found id: "f2087bf38944fc739afaac1113f222793a13154e659f060c4cdece3a7fa73071"
	I1009 19:05:49.692159  305375 cri.go:89] found id: "93ca74439d1e37925a704ede2fce384f9f9489c7b96ead6d63884128a4d9b0d1"
	I1009 19:05:49.692163  305375 cri.go:89] found id: "4011ef25cebccd0072dd10b711fedb8be54cb74589db33f0e4a5e667873eed44"
	I1009 19:05:49.692195  305375 cri.go:89] found id: "859a72eb5676e02ccdbfc8116afe2d9c4f2283fc97f6e130eb77ba45fe1f2ddf"
	I1009 19:05:49.692199  305375 cri.go:89] found id: "bb893c39a97db27f01be58b5eec66390173c64aa6dbf5fcc501e526bd34e4f74"
	I1009 19:05:49.692202  305375 cri.go:89] found id: "a9b5e7a178bf7423b4d23385f8409bd2da8f1ec9e312f7a1c786a7b9f1ec78fe"
	I1009 19:05:49.692214  305375 cri.go:89] found id: "cdcd01c9f8f4271bde354d676a0d7b97cf89b90bcce19fbab3de17f21aebb44c"
	I1009 19:05:49.692220  305375 cri.go:89] found id: "39a52fb8859c2040cedab3dbdc0662ae79f7d3abba463258d2c504bf8830448b"
	I1009 19:05:49.692224  305375 cri.go:89] found id: "7b7dc9732ce4b2127334e1e0c5b92a0ae3fbb0d316e98281fb8f8e8269c4b998"
	I1009 19:05:49.692230  305375 cri.go:89] found id: "ec4db71d717ddecd989934884e42ed0846d635333858d9d800181bfa0530c564"
	I1009 19:05:49.692233  305375 cri.go:89] found id: "f7e6d7b389c66f70de1f9dfa7a02589e922c296513ed0a3835867069f4fa9db8"
	I1009 19:05:49.692237  305375 cri.go:89] found id: "fbc396505d84e35ea37081e17222da9738c8ff9edd4bf2e014fdb1bf99f6de56"
	I1009 19:05:49.692245  305375 cri.go:89] found id: "c8fc026ca1019d8a0f4406f6cc4f8f68a03b36d38c792e26f09d7d78bf7ea9e3"
	I1009 19:05:49.692266  305375 cri.go:89] found id: "2823efa103e5ee38b792c909eaeee0c995e8a8302f5b0f522f6d786b3be0e7ba"
	I1009 19:05:49.692282  305375 cri.go:89] found id: "532259f4c5926820e3e18f689f80a1bc102631a6a0a05374223820ef91ec414f"
	I1009 19:05:49.692289  305375 cri.go:89] found id: "d85964586435683cdf29db9f8d8e0fd3637c91ff01ef302bb910a1397cf75b01"
	I1009 19:05:49.692293  305375 cri.go:89] found id: "7fcbf1be4bdef0161b5efca4fb661fd9a8fddc41f80f6974b49d4a8bb8d17634"
	I1009 19:05:49.692299  305375 cri.go:89] found id: "09a19318421aec51d9c6d371040aa4863198795c9d616ed4df00c26edc18b036"
	I1009 19:05:49.692310  305375 cri.go:89] found id: "aaa0ded06ea4b321c4f9a079d4cf69d526ba351445ed008be4734d67b7ea8524"
	I1009 19:05:49.692313  305375 cri.go:89] found id: "804d5a04697a7b5835636d98ba88b94561d9699443a8eadbbe90fd28d0b160cb"
	I1009 19:05:49.692320  305375 cri.go:89] found id: ""
	I1009 19:05:49.692396  305375 ssh_runner.go:195] Run: sudo runc list -f json
	I1009 19:05:49.707057  305375 out.go:203] 
	W1009 19:05:49.708304  305375 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-09T19:05:49Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-09T19:05:49Z" level=error msg="open /run/runc: no such file or directory"
	
	W1009 19:05:49.708325  305375 out.go:285] * 
	* 
	W1009 19:05:49.713335  305375 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_07218961934993dd21acc63caaf1aa08873c018e_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_07218961934993dd21acc63caaf1aa08873c018e_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1009 19:05:49.714766  305375 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable inspektor-gadget addon: args "out/minikube-linux-arm64 -p addons-999657 addons disable inspektor-gadget --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/InspektorGadget (6.26s)

                                                
                                    
TestAddons/parallel/MetricsServer (5.38s)

                                                
                                                
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:455: metrics-server stabilized in 3.717566ms
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:352: "metrics-server-85b7d694d7-qgbgn" [1b9f013c-1ebf-4d60-b677-f20de508376a] Running
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.003900878s
addons_test.go:463: (dbg) Run:  kubectl --context addons-999657 top pods -n kube-system
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-999657 addons disable metrics-server --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-999657 addons disable metrics-server --alsologtostderr -v=1: exit status 11 (281.628756ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1009 19:04:51.363054  304187 out.go:360] Setting OutFile to fd 1 ...
	I1009 19:04:51.365885  304187 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1009 19:04:51.365906  304187 out.go:374] Setting ErrFile to fd 2...
	I1009 19:04:51.365913  304187 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1009 19:04:51.366265  304187 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21683-294150/.minikube/bin
	I1009 19:04:51.366591  304187 mustload.go:65] Loading cluster: addons-999657
	I1009 19:04:51.367007  304187 config.go:182] Loaded profile config "addons-999657": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1009 19:04:51.367020  304187 addons.go:606] checking whether the cluster is paused
	I1009 19:04:51.367121  304187 config.go:182] Loaded profile config "addons-999657": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1009 19:04:51.367130  304187 host.go:66] Checking if "addons-999657" exists ...
	I1009 19:04:51.367578  304187 cli_runner.go:164] Run: docker container inspect addons-999657 --format={{.State.Status}}
	I1009 19:04:51.402838  304187 ssh_runner.go:195] Run: systemctl --version
	I1009 19:04:51.402895  304187 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-999657
	I1009 19:04:51.424165  304187 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/21683-294150/.minikube/machines/addons-999657/id_rsa Username:docker}
	I1009 19:04:51.531799  304187 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1009 19:04:51.531912  304187 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1009 19:04:51.561499  304187 cri.go:89] found id: "50e1747ecacea77a5f93e1d46aa99c4cb1fbce08f8f2f546154db1f9be02c796"
	I1009 19:04:51.561580  304187 cri.go:89] found id: "3f0053d1e02ad19ca3e128caaadebe9c4c0975ba7cb365a25e3c6d52870a17f0"
	I1009 19:04:51.561593  304187 cri.go:89] found id: "4e9a584f93742f2382e823963bed9b224f3ccba7c95660eb1dbca3c6b9908b3f"
	I1009 19:04:51.561598  304187 cri.go:89] found id: "f2087bf38944fc739afaac1113f222793a13154e659f060c4cdece3a7fa73071"
	I1009 19:04:51.561601  304187 cri.go:89] found id: "93ca74439d1e37925a704ede2fce384f9f9489c7b96ead6d63884128a4d9b0d1"
	I1009 19:04:51.561605  304187 cri.go:89] found id: "4011ef25cebccd0072dd10b711fedb8be54cb74589db33f0e4a5e667873eed44"
	I1009 19:04:51.561609  304187 cri.go:89] found id: "859a72eb5676e02ccdbfc8116afe2d9c4f2283fc97f6e130eb77ba45fe1f2ddf"
	I1009 19:04:51.561612  304187 cri.go:89] found id: "bb893c39a97db27f01be58b5eec66390173c64aa6dbf5fcc501e526bd34e4f74"
	I1009 19:04:51.561615  304187 cri.go:89] found id: "a9b5e7a178bf7423b4d23385f8409bd2da8f1ec9e312f7a1c786a7b9f1ec78fe"
	I1009 19:04:51.561626  304187 cri.go:89] found id: "cdcd01c9f8f4271bde354d676a0d7b97cf89b90bcce19fbab3de17f21aebb44c"
	I1009 19:04:51.561635  304187 cri.go:89] found id: "39a52fb8859c2040cedab3dbdc0662ae79f7d3abba463258d2c504bf8830448b"
	I1009 19:04:51.561639  304187 cri.go:89] found id: "7b7dc9732ce4b2127334e1e0c5b92a0ae3fbb0d316e98281fb8f8e8269c4b998"
	I1009 19:04:51.561642  304187 cri.go:89] found id: "ec4db71d717ddecd989934884e42ed0846d635333858d9d800181bfa0530c564"
	I1009 19:04:51.561645  304187 cri.go:89] found id: "f7e6d7b389c66f70de1f9dfa7a02589e922c296513ed0a3835867069f4fa9db8"
	I1009 19:04:51.561649  304187 cri.go:89] found id: "fbc396505d84e35ea37081e17222da9738c8ff9edd4bf2e014fdb1bf99f6de56"
	I1009 19:04:51.561654  304187 cri.go:89] found id: "c8fc026ca1019d8a0f4406f6cc4f8f68a03b36d38c792e26f09d7d78bf7ea9e3"
	I1009 19:04:51.561661  304187 cri.go:89] found id: "2823efa103e5ee38b792c909eaeee0c995e8a8302f5b0f522f6d786b3be0e7ba"
	I1009 19:04:51.561666  304187 cri.go:89] found id: "532259f4c5926820e3e18f689f80a1bc102631a6a0a05374223820ef91ec414f"
	I1009 19:04:51.561669  304187 cri.go:89] found id: "d85964586435683cdf29db9f8d8e0fd3637c91ff01ef302bb910a1397cf75b01"
	I1009 19:04:51.561672  304187 cri.go:89] found id: "7fcbf1be4bdef0161b5efca4fb661fd9a8fddc41f80f6974b49d4a8bb8d17634"
	I1009 19:04:51.561676  304187 cri.go:89] found id: "09a19318421aec51d9c6d371040aa4863198795c9d616ed4df00c26edc18b036"
	I1009 19:04:51.561682  304187 cri.go:89] found id: "aaa0ded06ea4b321c4f9a079d4cf69d526ba351445ed008be4734d67b7ea8524"
	I1009 19:04:51.561685  304187 cri.go:89] found id: "804d5a04697a7b5835636d98ba88b94561d9699443a8eadbbe90fd28d0b160cb"
	I1009 19:04:51.561688  304187 cri.go:89] found id: ""
	I1009 19:04:51.561743  304187 ssh_runner.go:195] Run: sudo runc list -f json
	I1009 19:04:51.577195  304187 out.go:203] 
	W1009 19:04:51.579960  304187 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-09T19:04:51Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-09T19:04:51Z" level=error msg="open /run/runc: no such file or directory"
	
	W1009 19:04:51.579986  304187 out.go:285] * 
	* 
	W1009 19:04:51.585013  304187 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9e377edc2b59264359e9c26f81b048e390fa608a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9e377edc2b59264359e9c26f81b048e390fa608a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1009 19:04:51.588044  304187 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable metrics-server addon: args "out/minikube-linux-arm64 -p addons-999657 addons disable metrics-server --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/MetricsServer (5.38s)
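Note on the failure mode: the addon enable/disable failures in this report share one root cause. Before touching an addon, minikube checks whether the cluster is paused by listing kube-system containers with crictl and then running `sudo runc list -f json`; on this crio node that command exits 1 because /run/runc does not exist, so the check itself fails and the addon command aborts with MK_ADDON_DISABLE_PAUSED. A rough way to reproduce the check by hand, using the same commands shown in the stderr above (the alternate --root path is an assumption, since crio may keep its runc state under a different directory):

	minikube -p addons-999657 ssh
	sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system   # succeeds, prints the container IDs listed above
	sudo runc list -f json                                                      # fails: open /run/runc: no such file or directory
	sudo runc --root /run/crio/runc list -f json                                # assumed crio state dir; adjust to the node's actual layout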

                                                
                                    
x
+
TestAddons/parallel/CSI (55.64s)

                                                
                                                
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CSI
I1009 19:04:47.825919  296002 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I1009 19:04:47.831394  296002 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I1009 19:04:47.831420  296002 kapi.go:107] duration metric: took 5.515275ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:549: csi-hostpath-driver pods stabilized in 5.525694ms
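The readiness wait above keys off a label selector; it can be checked by hand with the same selector the harness uses:

	kubectl --context addons-999657 -n kube-system get pods -l kubernetes.io/minikube-addons=csi-hostpath-driver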
addons_test.go:552: (dbg) Run:  kubectl --context addons-999657 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:557: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-999657 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-999657 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-999657 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-999657 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-999657 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-999657 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-999657 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-999657 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-999657 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-999657 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-999657 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-999657 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-999657 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-999657 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-999657 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-999657 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-999657 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-999657 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:562: (dbg) Run:  kubectl --context addons-999657 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:567: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:352: "task-pv-pod" [cc33abd2-ce1a-4803-b81a-9a931f37580b] Pending
helpers_test.go:352: "task-pv-pod" [cc33abd2-ce1a-4803-b81a-9a931f37580b] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:352: "task-pv-pod" [cc33abd2-ce1a-4803-b81a-9a931f37580b] Running
addons_test.go:567: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 12.00351166s
addons_test.go:572: (dbg) Run:  kubectl --context addons-999657 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:577: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:427: (dbg) Run:  kubectl --context addons-999657 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:582: (dbg) Run:  kubectl --context addons-999657 delete pod task-pv-pod
addons_test.go:588: (dbg) Run:  kubectl --context addons-999657 delete pvc hpvc
addons_test.go:594: (dbg) Run:  kubectl --context addons-999657 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:599: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-999657 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-999657 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-999657 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-999657 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-999657 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-999657 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-999657 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-999657 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-999657 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-999657 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-999657 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-999657 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-999657 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-999657 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-999657 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-999657 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:604: (dbg) Run:  kubectl --context addons-999657 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:609: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:352: "task-pv-pod-restore" [9e094481-8df7-40a9-b685-47bec7e7ac95] Pending
helpers_test.go:352: "task-pv-pod-restore" [9e094481-8df7-40a9-b685-47bec7e7ac95] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:352: "task-pv-pod-restore" [9e094481-8df7-40a9-b685-47bec7e7ac95] Running
addons_test.go:609: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 8.003984622s
addons_test.go:614: (dbg) Run:  kubectl --context addons-999657 delete pod task-pv-pod-restore
addons_test.go:618: (dbg) Run:  kubectl --context addons-999657 delete pvc hpvc-restore
addons_test.go:622: (dbg) Run:  kubectl --context addons-999657 delete volumesnapshot new-snapshot-demo
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-999657 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-999657 addons disable volumesnapshots --alsologtostderr -v=1: exit status 11 (270.765152ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1009 19:05:42.981754  305271 out.go:360] Setting OutFile to fd 1 ...
	I1009 19:05:42.982720  305271 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1009 19:05:42.982765  305271 out.go:374] Setting ErrFile to fd 2...
	I1009 19:05:42.982791  305271 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1009 19:05:42.983083  305271 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21683-294150/.minikube/bin
	I1009 19:05:42.983429  305271 mustload.go:65] Loading cluster: addons-999657
	I1009 19:05:42.983864  305271 config.go:182] Loaded profile config "addons-999657": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1009 19:05:42.983907  305271 addons.go:606] checking whether the cluster is paused
	I1009 19:05:42.984071  305271 config.go:182] Loaded profile config "addons-999657": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1009 19:05:42.984109  305271 host.go:66] Checking if "addons-999657" exists ...
	I1009 19:05:42.984614  305271 cli_runner.go:164] Run: docker container inspect addons-999657 --format={{.State.Status}}
	I1009 19:05:43.003900  305271 ssh_runner.go:195] Run: systemctl --version
	I1009 19:05:43.003991  305271 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-999657
	I1009 19:05:43.023027  305271 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/21683-294150/.minikube/machines/addons-999657/id_rsa Username:docker}
	I1009 19:05:43.129407  305271 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1009 19:05:43.129526  305271 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1009 19:05:43.161416  305271 cri.go:89] found id: "50e1747ecacea77a5f93e1d46aa99c4cb1fbce08f8f2f546154db1f9be02c796"
	I1009 19:05:43.161439  305271 cri.go:89] found id: "3f0053d1e02ad19ca3e128caaadebe9c4c0975ba7cb365a25e3c6d52870a17f0"
	I1009 19:05:43.161444  305271 cri.go:89] found id: "4e9a584f93742f2382e823963bed9b224f3ccba7c95660eb1dbca3c6b9908b3f"
	I1009 19:05:43.161448  305271 cri.go:89] found id: "f2087bf38944fc739afaac1113f222793a13154e659f060c4cdece3a7fa73071"
	I1009 19:05:43.161451  305271 cri.go:89] found id: "93ca74439d1e37925a704ede2fce384f9f9489c7b96ead6d63884128a4d9b0d1"
	I1009 19:05:43.161455  305271 cri.go:89] found id: "4011ef25cebccd0072dd10b711fedb8be54cb74589db33f0e4a5e667873eed44"
	I1009 19:05:43.161459  305271 cri.go:89] found id: "859a72eb5676e02ccdbfc8116afe2d9c4f2283fc97f6e130eb77ba45fe1f2ddf"
	I1009 19:05:43.161462  305271 cri.go:89] found id: "bb893c39a97db27f01be58b5eec66390173c64aa6dbf5fcc501e526bd34e4f74"
	I1009 19:05:43.161466  305271 cri.go:89] found id: "a9b5e7a178bf7423b4d23385f8409bd2da8f1ec9e312f7a1c786a7b9f1ec78fe"
	I1009 19:05:43.161490  305271 cri.go:89] found id: "cdcd01c9f8f4271bde354d676a0d7b97cf89b90bcce19fbab3de17f21aebb44c"
	I1009 19:05:43.161498  305271 cri.go:89] found id: "39a52fb8859c2040cedab3dbdc0662ae79f7d3abba463258d2c504bf8830448b"
	I1009 19:05:43.161502  305271 cri.go:89] found id: "7b7dc9732ce4b2127334e1e0c5b92a0ae3fbb0d316e98281fb8f8e8269c4b998"
	I1009 19:05:43.161506  305271 cri.go:89] found id: "ec4db71d717ddecd989934884e42ed0846d635333858d9d800181bfa0530c564"
	I1009 19:05:43.161509  305271 cri.go:89] found id: "f7e6d7b389c66f70de1f9dfa7a02589e922c296513ed0a3835867069f4fa9db8"
	I1009 19:05:43.161513  305271 cri.go:89] found id: "fbc396505d84e35ea37081e17222da9738c8ff9edd4bf2e014fdb1bf99f6de56"
	I1009 19:05:43.161522  305271 cri.go:89] found id: "c8fc026ca1019d8a0f4406f6cc4f8f68a03b36d38c792e26f09d7d78bf7ea9e3"
	I1009 19:05:43.161531  305271 cri.go:89] found id: "2823efa103e5ee38b792c909eaeee0c995e8a8302f5b0f522f6d786b3be0e7ba"
	I1009 19:05:43.161536  305271 cri.go:89] found id: "532259f4c5926820e3e18f689f80a1bc102631a6a0a05374223820ef91ec414f"
	I1009 19:05:43.161539  305271 cri.go:89] found id: "d85964586435683cdf29db9f8d8e0fd3637c91ff01ef302bb910a1397cf75b01"
	I1009 19:05:43.161542  305271 cri.go:89] found id: "7fcbf1be4bdef0161b5efca4fb661fd9a8fddc41f80f6974b49d4a8bb8d17634"
	I1009 19:05:43.161547  305271 cri.go:89] found id: "09a19318421aec51d9c6d371040aa4863198795c9d616ed4df00c26edc18b036"
	I1009 19:05:43.161550  305271 cri.go:89] found id: "aaa0ded06ea4b321c4f9a079d4cf69d526ba351445ed008be4734d67b7ea8524"
	I1009 19:05:43.161553  305271 cri.go:89] found id: "804d5a04697a7b5835636d98ba88b94561d9699443a8eadbbe90fd28d0b160cb"
	I1009 19:05:43.161573  305271 cri.go:89] found id: ""
	I1009 19:05:43.161632  305271 ssh_runner.go:195] Run: sudo runc list -f json
	I1009 19:05:43.176337  305271 out.go:203] 
	W1009 19:05:43.177521  305271 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-09T19:05:43Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-09T19:05:43Z" level=error msg="open /run/runc: no such file or directory"
	
	W1009 19:05:43.177540  305271 out.go:285] * 
	* 
	W1009 19:05:43.182533  305271 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_f6150db7515caf82d8c4c5baeba9fd21f738a7e0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_f6150db7515caf82d8c4c5baeba9fd21f738a7e0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1009 19:05:43.183612  305271 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable volumesnapshots addon: args "out/minikube-linux-arm64 -p addons-999657 addons disable volumesnapshots --alsologtostderr -v=1": exit status 11
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-999657 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-999657 addons disable csi-hostpath-driver --alsologtostderr -v=1: exit status 11 (269.692319ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1009 19:05:43.247646  305315 out.go:360] Setting OutFile to fd 1 ...
	I1009 19:05:43.248545  305315 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1009 19:05:43.248588  305315 out.go:374] Setting ErrFile to fd 2...
	I1009 19:05:43.248620  305315 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1009 19:05:43.253784  305315 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21683-294150/.minikube/bin
	I1009 19:05:43.254232  305315 mustload.go:65] Loading cluster: addons-999657
	I1009 19:05:43.254678  305315 config.go:182] Loaded profile config "addons-999657": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1009 19:05:43.254699  305315 addons.go:606] checking whether the cluster is paused
	I1009 19:05:43.254844  305315 config.go:182] Loaded profile config "addons-999657": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1009 19:05:43.254862  305315 host.go:66] Checking if "addons-999657" exists ...
	I1009 19:05:43.255398  305315 cli_runner.go:164] Run: docker container inspect addons-999657 --format={{.State.Status}}
	I1009 19:05:43.273488  305315 ssh_runner.go:195] Run: systemctl --version
	I1009 19:05:43.273565  305315 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-999657
	I1009 19:05:43.297786  305315 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/21683-294150/.minikube/machines/addons-999657/id_rsa Username:docker}
	I1009 19:05:43.403837  305315 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1009 19:05:43.403927  305315 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1009 19:05:43.434445  305315 cri.go:89] found id: "50e1747ecacea77a5f93e1d46aa99c4cb1fbce08f8f2f546154db1f9be02c796"
	I1009 19:05:43.434465  305315 cri.go:89] found id: "3f0053d1e02ad19ca3e128caaadebe9c4c0975ba7cb365a25e3c6d52870a17f0"
	I1009 19:05:43.434470  305315 cri.go:89] found id: "4e9a584f93742f2382e823963bed9b224f3ccba7c95660eb1dbca3c6b9908b3f"
	I1009 19:05:43.434474  305315 cri.go:89] found id: "f2087bf38944fc739afaac1113f222793a13154e659f060c4cdece3a7fa73071"
	I1009 19:05:43.434478  305315 cri.go:89] found id: "93ca74439d1e37925a704ede2fce384f9f9489c7b96ead6d63884128a4d9b0d1"
	I1009 19:05:43.434493  305315 cri.go:89] found id: "4011ef25cebccd0072dd10b711fedb8be54cb74589db33f0e4a5e667873eed44"
	I1009 19:05:43.434498  305315 cri.go:89] found id: "859a72eb5676e02ccdbfc8116afe2d9c4f2283fc97f6e130eb77ba45fe1f2ddf"
	I1009 19:05:43.434501  305315 cri.go:89] found id: "bb893c39a97db27f01be58b5eec66390173c64aa6dbf5fcc501e526bd34e4f74"
	I1009 19:05:43.434505  305315 cri.go:89] found id: "a9b5e7a178bf7423b4d23385f8409bd2da8f1ec9e312f7a1c786a7b9f1ec78fe"
	I1009 19:05:43.434517  305315 cri.go:89] found id: "cdcd01c9f8f4271bde354d676a0d7b97cf89b90bcce19fbab3de17f21aebb44c"
	I1009 19:05:43.434521  305315 cri.go:89] found id: "39a52fb8859c2040cedab3dbdc0662ae79f7d3abba463258d2c504bf8830448b"
	I1009 19:05:43.434526  305315 cri.go:89] found id: "7b7dc9732ce4b2127334e1e0c5b92a0ae3fbb0d316e98281fb8f8e8269c4b998"
	I1009 19:05:43.434530  305315 cri.go:89] found id: "ec4db71d717ddecd989934884e42ed0846d635333858d9d800181bfa0530c564"
	I1009 19:05:43.434538  305315 cri.go:89] found id: "f7e6d7b389c66f70de1f9dfa7a02589e922c296513ed0a3835867069f4fa9db8"
	I1009 19:05:43.434542  305315 cri.go:89] found id: "fbc396505d84e35ea37081e17222da9738c8ff9edd4bf2e014fdb1bf99f6de56"
	I1009 19:05:43.434547  305315 cri.go:89] found id: "c8fc026ca1019d8a0f4406f6cc4f8f68a03b36d38c792e26f09d7d78bf7ea9e3"
	I1009 19:05:43.434555  305315 cri.go:89] found id: "2823efa103e5ee38b792c909eaeee0c995e8a8302f5b0f522f6d786b3be0e7ba"
	I1009 19:05:43.434561  305315 cri.go:89] found id: "532259f4c5926820e3e18f689f80a1bc102631a6a0a05374223820ef91ec414f"
	I1009 19:05:43.434564  305315 cri.go:89] found id: "d85964586435683cdf29db9f8d8e0fd3637c91ff01ef302bb910a1397cf75b01"
	I1009 19:05:43.434568  305315 cri.go:89] found id: "7fcbf1be4bdef0161b5efca4fb661fd9a8fddc41f80f6974b49d4a8bb8d17634"
	I1009 19:05:43.434572  305315 cri.go:89] found id: "09a19318421aec51d9c6d371040aa4863198795c9d616ed4df00c26edc18b036"
	I1009 19:05:43.434576  305315 cri.go:89] found id: "aaa0ded06ea4b321c4f9a079d4cf69d526ba351445ed008be4734d67b7ea8524"
	I1009 19:05:43.434579  305315 cri.go:89] found id: "804d5a04697a7b5835636d98ba88b94561d9699443a8eadbbe90fd28d0b160cb"
	I1009 19:05:43.434582  305315 cri.go:89] found id: ""
	I1009 19:05:43.434636  305315 ssh_runner.go:195] Run: sudo runc list -f json
	I1009 19:05:43.448244  305315 out.go:203] 
	W1009 19:05:43.449424  305315 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-09T19:05:43Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-09T19:05:43Z" level=error msg="open /run/runc: no such file or directory"
	
	W1009 19:05:43.449463  305315 out.go:285] * 
	* 
	W1009 19:05:43.454537  305315 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_913eef9b964ccef8b5b536327192b81f4aff5da9_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_913eef9b964ccef8b5b536327192b81f4aff5da9_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1009 19:05:43.455702  305315 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable csi-hostpath-driver addon: args "out/minikube-linux-arm64 -p addons-999657 addons disable csi-hostpath-driver --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/CSI (55.64s)
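For reference, the storage flow this test exercises (which passed; only the final addon-disable calls failed, with the same runc error described above) is the following kubectl sequence, lifted from the steps logged above. The manifests live in minikube's testdata/csi-hostpath-driver directory and are not reproduced here:

	kubectl --context addons-999657 create -f testdata/csi-hostpath-driver/pvc.yaml          # PVC "hpvc"; poll .status.phase until Bound
	kubectl --context addons-999657 create -f testdata/csi-hostpath-driver/pv-pod.yaml       # pod "task-pv-pod" mounting the PVC
	kubectl --context addons-999657 create -f testdata/csi-hostpath-driver/snapshot.yaml     # VolumeSnapshot "new-snapshot-demo"; wait for .status.readyToUse
	kubectl --context addons-999657 delete pod task-pv-pod
	kubectl --context addons-999657 delete pvc hpvc
	kubectl --context addons-999657 create -f testdata/csi-hostpath-driver/pvc-restore.yaml  # PVC "hpvc-restore" restored from the snapshot
	kubectl --context addons-999657 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
	kubectl --context addons-999657 delete pod task-pv-pod-restore
	kubectl --context addons-999657 delete pvc hpvc-restore
	kubectl --context addons-999657 delete volumesnapshot new-snapshot-demo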

                                                
                                    
x
+
TestAddons/parallel/Headlamp (3.14s)

                                                
                                                
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:808: (dbg) Run:  out/minikube-linux-arm64 addons enable headlamp -p addons-999657 --alsologtostderr -v=1
addons_test.go:808: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable headlamp -p addons-999657 --alsologtostderr -v=1: exit status 11 (271.213371ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1009 19:04:24.600670  302973 out.go:360] Setting OutFile to fd 1 ...
	I1009 19:04:24.601572  302973 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1009 19:04:24.601614  302973 out.go:374] Setting ErrFile to fd 2...
	I1009 19:04:24.601635  302973 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1009 19:04:24.601974  302973 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21683-294150/.minikube/bin
	I1009 19:04:24.602342  302973 mustload.go:65] Loading cluster: addons-999657
	I1009 19:04:24.602763  302973 config.go:182] Loaded profile config "addons-999657": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1009 19:04:24.602803  302973 addons.go:606] checking whether the cluster is paused
	I1009 19:04:24.602998  302973 config.go:182] Loaded profile config "addons-999657": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1009 19:04:24.603035  302973 host.go:66] Checking if "addons-999657" exists ...
	I1009 19:04:24.603524  302973 cli_runner.go:164] Run: docker container inspect addons-999657 --format={{.State.Status}}
	I1009 19:04:24.620711  302973 ssh_runner.go:195] Run: systemctl --version
	I1009 19:04:24.621006  302973 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-999657
	I1009 19:04:24.638476  302973 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/21683-294150/.minikube/machines/addons-999657/id_rsa Username:docker}
	I1009 19:04:24.742371  302973 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1009 19:04:24.742463  302973 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1009 19:04:24.778088  302973 cri.go:89] found id: "50e1747ecacea77a5f93e1d46aa99c4cb1fbce08f8f2f546154db1f9be02c796"
	I1009 19:04:24.778117  302973 cri.go:89] found id: "3f0053d1e02ad19ca3e128caaadebe9c4c0975ba7cb365a25e3c6d52870a17f0"
	I1009 19:04:24.778122  302973 cri.go:89] found id: "4e9a584f93742f2382e823963bed9b224f3ccba7c95660eb1dbca3c6b9908b3f"
	I1009 19:04:24.778126  302973 cri.go:89] found id: "f2087bf38944fc739afaac1113f222793a13154e659f060c4cdece3a7fa73071"
	I1009 19:04:24.778129  302973 cri.go:89] found id: "93ca74439d1e37925a704ede2fce384f9f9489c7b96ead6d63884128a4d9b0d1"
	I1009 19:04:24.778133  302973 cri.go:89] found id: "4011ef25cebccd0072dd10b711fedb8be54cb74589db33f0e4a5e667873eed44"
	I1009 19:04:24.778136  302973 cri.go:89] found id: "859a72eb5676e02ccdbfc8116afe2d9c4f2283fc97f6e130eb77ba45fe1f2ddf"
	I1009 19:04:24.778139  302973 cri.go:89] found id: "bb893c39a97db27f01be58b5eec66390173c64aa6dbf5fcc501e526bd34e4f74"
	I1009 19:04:24.778142  302973 cri.go:89] found id: "a9b5e7a178bf7423b4d23385f8409bd2da8f1ec9e312f7a1c786a7b9f1ec78fe"
	I1009 19:04:24.778153  302973 cri.go:89] found id: "cdcd01c9f8f4271bde354d676a0d7b97cf89b90bcce19fbab3de17f21aebb44c"
	I1009 19:04:24.778159  302973 cri.go:89] found id: "39a52fb8859c2040cedab3dbdc0662ae79f7d3abba463258d2c504bf8830448b"
	I1009 19:04:24.778168  302973 cri.go:89] found id: "7b7dc9732ce4b2127334e1e0c5b92a0ae3fbb0d316e98281fb8f8e8269c4b998"
	I1009 19:04:24.778175  302973 cri.go:89] found id: "ec4db71d717ddecd989934884e42ed0846d635333858d9d800181bfa0530c564"
	I1009 19:04:24.778179  302973 cri.go:89] found id: "f7e6d7b389c66f70de1f9dfa7a02589e922c296513ed0a3835867069f4fa9db8"
	I1009 19:04:24.778182  302973 cri.go:89] found id: "fbc396505d84e35ea37081e17222da9738c8ff9edd4bf2e014fdb1bf99f6de56"
	I1009 19:04:24.778189  302973 cri.go:89] found id: "c8fc026ca1019d8a0f4406f6cc4f8f68a03b36d38c792e26f09d7d78bf7ea9e3"
	I1009 19:04:24.778195  302973 cri.go:89] found id: "2823efa103e5ee38b792c909eaeee0c995e8a8302f5b0f522f6d786b3be0e7ba"
	I1009 19:04:24.778200  302973 cri.go:89] found id: "532259f4c5926820e3e18f689f80a1bc102631a6a0a05374223820ef91ec414f"
	I1009 19:04:24.778203  302973 cri.go:89] found id: "d85964586435683cdf29db9f8d8e0fd3637c91ff01ef302bb910a1397cf75b01"
	I1009 19:04:24.778206  302973 cri.go:89] found id: "7fcbf1be4bdef0161b5efca4fb661fd9a8fddc41f80f6974b49d4a8bb8d17634"
	I1009 19:04:24.778210  302973 cri.go:89] found id: "09a19318421aec51d9c6d371040aa4863198795c9d616ed4df00c26edc18b036"
	I1009 19:04:24.778213  302973 cri.go:89] found id: "aaa0ded06ea4b321c4f9a079d4cf69d526ba351445ed008be4734d67b7ea8524"
	I1009 19:04:24.778216  302973 cri.go:89] found id: "804d5a04697a7b5835636d98ba88b94561d9699443a8eadbbe90fd28d0b160cb"
	I1009 19:04:24.778219  302973 cri.go:89] found id: ""
	I1009 19:04:24.778275  302973 ssh_runner.go:195] Run: sudo runc list -f json
	I1009 19:04:24.793489  302973 out.go:203] 
	W1009 19:04:24.796449  302973 out.go:285] X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-09T19:04:24Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-09T19:04:24Z" level=error msg="open /run/runc: no such file or directory"
	
	W1009 19:04:24.796494  302973 out.go:285] * 
	* 
	W1009 19:04:24.801682  302973 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_af3b8a9ce4f102efc219f1404c9eed7a69cbf2d5_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_af3b8a9ce4f102efc219f1404c9eed7a69cbf2d5_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1009 19:04:24.804723  302973 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:810: failed to enable headlamp addon: args: "out/minikube-linux-arm64 addons enable headlamp -p addons-999657 --alsologtostderr -v=1": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestAddons/parallel/Headlamp]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestAddons/parallel/Headlamp]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect addons-999657
helpers_test.go:243: (dbg) docker inspect addons-999657:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "ecd6cd18f751718cb40377c19ed8fb91d99c6fd2c7932de2df67df8a9fb7b9bd",
	        "Created": "2025-10-09T19:01:42.773639389Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 297173,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-09T19:01:42.832045963Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:0e619a02aa1f399c748a05785ca381533d7a01df724b62f3a9ddd9e288db8f6f",
	        "ResolvConfPath": "/var/lib/docker/containers/ecd6cd18f751718cb40377c19ed8fb91d99c6fd2c7932de2df67df8a9fb7b9bd/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/ecd6cd18f751718cb40377c19ed8fb91d99c6fd2c7932de2df67df8a9fb7b9bd/hostname",
	        "HostsPath": "/var/lib/docker/containers/ecd6cd18f751718cb40377c19ed8fb91d99c6fd2c7932de2df67df8a9fb7b9bd/hosts",
	        "LogPath": "/var/lib/docker/containers/ecd6cd18f751718cb40377c19ed8fb91d99c6fd2c7932de2df67df8a9fb7b9bd/ecd6cd18f751718cb40377c19ed8fb91d99c6fd2c7932de2df67df8a9fb7b9bd-json.log",
	        "Name": "/addons-999657",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-999657:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "addons-999657",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "ecd6cd18f751718cb40377c19ed8fb91d99c6fd2c7932de2df67df8a9fb7b9bd",
	                "LowerDir": "/var/lib/docker/overlay2/38454846971f2b21cec936743dc4c4192a2e913d6fb39fa2ee1d6c41b9b691b6-init/diff:/var/lib/docker/overlay2/810a91395ed9b7ed2c0bbbdee8600efcf64f88722cbabc47d471235a9f901ed9/diff",
	                "MergedDir": "/var/lib/docker/overlay2/38454846971f2b21cec936743dc4c4192a2e913d6fb39fa2ee1d6c41b9b691b6/merged",
	                "UpperDir": "/var/lib/docker/overlay2/38454846971f2b21cec936743dc4c4192a2e913d6fb39fa2ee1d6c41b9b691b6/diff",
	                "WorkDir": "/var/lib/docker/overlay2/38454846971f2b21cec936743dc4c4192a2e913d6fb39fa2ee1d6c41b9b691b6/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "addons-999657",
	                "Source": "/var/lib/docker/volumes/addons-999657/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-999657",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-999657",
	                "name.minikube.sigs.k8s.io": "addons-999657",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "753810abaf007ac4f831309901d634c334dccb43ce0143ff6439762a6a39d5a8",
	            "SandboxKey": "/var/run/docker/netns/753810abaf00",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33139"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33140"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33143"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33141"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33142"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-999657": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "7a:f5:bf:e8:96:88",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "2c1c6236327c66b1abe1475e3f979bdb96192bd80d34b9b787ee03064ac7e95d",
	                    "EndpointID": "dcfa7f8b7c67063e84bdcfacc89b13f37a8ba2fd94f08195d90a8cbda63543e1",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-999657",
	                        "ecd6cd18f751"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
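The Ports block in the inspect output above is where the forwarded SSH endpoint comes from; the Go template the harness runs (see the cli_runner.go lines in the stderr above) can be reproduced directly and prints the mapped host port for 22/tcp (33139 here, bound on 127.0.0.1):

	docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' addons-999657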
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p addons-999657 -n addons-999657
helpers_test.go:252: <<< TestAddons/parallel/Headlamp FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestAddons/parallel/Headlamp]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p addons-999657 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p addons-999657 logs -n 25: (1.442110977s)
helpers_test.go:260: TestAddons/parallel/Headlamp logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                                                                                                                                   ARGS                                                                                                                                                                                                                                   │        PROFILE         │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-606818 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                │ download-only-606818   │ jenkins │ v1.37.0 │ 09 Oct 25 18:59 UTC │                     │
	│ delete  │ --all                                                                                                                                                                                                                                                                                                                                                                                                                                                                    │ minikube               │ jenkins │ v1.37.0 │ 09 Oct 25 19:00 UTC │ 09 Oct 25 19:00 UTC │
	│ delete  │ -p download-only-606818                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-606818   │ jenkins │ v1.37.0 │ 09 Oct 25 19:00 UTC │ 09 Oct 25 19:00 UTC │
	│ start   │ -o=json --download-only -p download-only-214075 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                │ download-only-214075   │ jenkins │ v1.37.0 │ 09 Oct 25 19:00 UTC │                     │
	│ delete  │ --all                                                                                                                                                                                                                                                                                                                                                                                                                                                                    │ minikube               │ jenkins │ v1.37.0 │ 09 Oct 25 19:01 UTC │ 09 Oct 25 19:01 UTC │
	│ delete  │ -p download-only-214075                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-214075   │ jenkins │ v1.37.0 │ 09 Oct 25 19:01 UTC │ 09 Oct 25 19:01 UTC │
	│ delete  │ -p download-only-606818                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-606818   │ jenkins │ v1.37.0 │ 09 Oct 25 19:01 UTC │ 09 Oct 25 19:01 UTC │
	│ delete  │ -p download-only-214075                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-214075   │ jenkins │ v1.37.0 │ 09 Oct 25 19:01 UTC │ 09 Oct 25 19:01 UTC │
	│ start   │ --download-only -p download-docker-847696 --alsologtostderr --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                                                                    │ download-docker-847696 │ jenkins │ v1.37.0 │ 09 Oct 25 19:01 UTC │                     │
	│ delete  │ -p download-docker-847696                                                                                                                                                                                                                                                                                                                                                                                                                                                │ download-docker-847696 │ jenkins │ v1.37.0 │ 09 Oct 25 19:01 UTC │ 09 Oct 25 19:01 UTC │
	│ start   │ --download-only -p binary-mirror-719553 --alsologtostderr --binary-mirror http://127.0.0.1:39775 --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                               │ binary-mirror-719553   │ jenkins │ v1.37.0 │ 09 Oct 25 19:01 UTC │                     │
	│ delete  │ -p binary-mirror-719553                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ binary-mirror-719553   │ jenkins │ v1.37.0 │ 09 Oct 25 19:01 UTC │ 09 Oct 25 19:01 UTC │
	│ addons  │ enable dashboard -p addons-999657                                                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-999657          │ jenkins │ v1.37.0 │ 09 Oct 25 19:01 UTC │                     │
	│ addons  │ disable dashboard -p addons-999657                                                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-999657          │ jenkins │ v1.37.0 │ 09 Oct 25 19:01 UTC │                     │
	│ start   │ -p addons-999657 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher │ addons-999657          │ jenkins │ v1.37.0 │ 09 Oct 25 19:01 UTC │ 09 Oct 25 19:04 UTC │
	│ addons  │ addons-999657 addons disable volcano --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                              │ addons-999657          │ jenkins │ v1.37.0 │ 09 Oct 25 19:04 UTC │                     │
	│ addons  │ addons-999657 addons disable gcp-auth --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-999657          │ jenkins │ v1.37.0 │ 09 Oct 25 19:04 UTC │                     │
	│ addons  │ enable headlamp -p addons-999657 --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                  │ addons-999657          │ jenkins │ v1.37.0 │ 09 Oct 25 19:04 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/09 19:01:16
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1009 19:01:16.679364  296772 out.go:360] Setting OutFile to fd 1 ...
	I1009 19:01:16.679508  296772 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1009 19:01:16.679519  296772 out.go:374] Setting ErrFile to fd 2...
	I1009 19:01:16.679525  296772 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1009 19:01:16.679777  296772 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21683-294150/.minikube/bin
	I1009 19:01:16.680280  296772 out.go:368] Setting JSON to false
	I1009 19:01:16.681140  296772 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":6216,"bootTime":1760030261,"procs":146,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1009 19:01:16.681207  296772 start.go:143] virtualization:  
	I1009 19:01:16.682707  296772 out.go:179] * [addons-999657] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1009 19:01:16.684182  296772 out.go:179]   - MINIKUBE_LOCATION=21683
	I1009 19:01:16.684277  296772 notify.go:221] Checking for updates...
	I1009 19:01:16.686820  296772 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1009 19:01:16.688293  296772 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21683-294150/kubeconfig
	I1009 19:01:16.689379  296772 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21683-294150/.minikube
	I1009 19:01:16.690543  296772 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1009 19:01:16.691682  296772 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1009 19:01:16.693032  296772 driver.go:422] Setting default libvirt URI to qemu:///system
	I1009 19:01:16.714648  296772 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1009 19:01:16.714767  296772 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1009 19:01:16.776724  296772 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:23 OomKillDisable:true NGoroutines:43 SystemTime:2025-10-09 19:01:16.767180971 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214827008 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1009 19:01:16.776839  296772 docker.go:319] overlay module found
	I1009 19:01:16.778333  296772 out.go:179] * Using the docker driver based on user configuration
	I1009 19:01:16.779456  296772 start.go:309] selected driver: docker
	I1009 19:01:16.779483  296772 start.go:930] validating driver "docker" against <nil>
	I1009 19:01:16.779498  296772 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1009 19:01:16.780233  296772 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1009 19:01:16.833715  296772 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:23 OomKillDisable:true NGoroutines:43 SystemTime:2025-10-09 19:01:16.824873484 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214827008 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1009 19:01:16.833874  296772 start_flags.go:328] no existing cluster config was found, will generate one from the flags 
	I1009 19:01:16.834098  296772 start_flags.go:993] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1009 19:01:16.835447  296772 out.go:179] * Using Docker driver with root privileges
	I1009 19:01:16.836619  296772 cni.go:84] Creating CNI manager for ""
	I1009 19:01:16.836694  296772 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1009 19:01:16.836709  296772 start_flags.go:337] Found "CNI" CNI - setting NetworkPlugin=cni
	I1009 19:01:16.836782  296772 start.go:353] cluster config:
	{Name:addons-999657 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-999657 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime
:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:
AutoPauseInterval:1m0s}
	I1009 19:01:16.838936  296772 out.go:179] * Starting "addons-999657" primary control-plane node in "addons-999657" cluster
	I1009 19:01:16.840110  296772 cache.go:123] Beginning downloading kic base image for docker with crio
	I1009 19:01:16.841621  296772 out.go:179] * Pulling base image v0.0.48-1759745255-21703 ...
	I1009 19:01:16.842917  296772 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1009 19:01:16.842946  296772 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon
	I1009 19:01:16.842977  296772 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21683-294150/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1009 19:01:16.842987  296772 cache.go:58] Caching tarball of preloaded images
	I1009 19:01:16.843080  296772 preload.go:233] Found /home/jenkins/minikube-integration/21683-294150/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1009 19:01:16.843090  296772 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1009 19:01:16.843412  296772 profile.go:143] Saving config to /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/addons-999657/config.json ...
	I1009 19:01:16.843441  296772 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/addons-999657/config.json: {Name:mk995129adb1de29ffda6c1745cc80de4b941c08 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 19:01:16.858617  296772 cache.go:152] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 to local cache
	I1009 19:01:16.858757  296772 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local cache directory
	I1009 19:01:16.858778  296772 image.go:68] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local cache directory, skipping pull
	I1009 19:01:16.858783  296772 image.go:137] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 exists in cache, skipping pull
	I1009 19:01:16.858790  296772 cache.go:155] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 as a tarball
	I1009 19:01:16.858795  296772 cache.go:165] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 from local cache
	I1009 19:01:34.886055  296772 cache.go:167] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 from cached tarball
	I1009 19:01:34.886102  296772 cache.go:232] Successfully downloaded all kic artifacts
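At this point the kicbase image has been loaded into the host's Docker daemon from the cached tarball. A minimal sketch, not part of the test run, for confirming the image is present locally (assuming the kicbase repository shown above):

	docker images gcr.io/k8s-minikube/kicbase-builds --format '{{.Repository}}:{{.Tag}}  {{.ID}}'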
	I1009 19:01:34.886132  296772 start.go:361] acquireMachinesLock for addons-999657: {Name:mk16a18698d56f1afca86a28d3906fc672e3afb0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1009 19:01:34.886263  296772 start.go:365] duration metric: took 109.17µs to acquireMachinesLock for "addons-999657"
	I1009 19:01:34.886298  296772 start.go:94] Provisioning new machine with config: &{Name:addons-999657 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-999657 Namespace:default APIServerHAVIP: APIServerName:min
ikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath:
SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1009 19:01:34.886389  296772 start.go:126] createHost starting for "" (driver="docker")
	I1009 19:01:34.888004  296772 out.go:252] * Creating docker container (CPUs=2, Memory=4096MB) ...
	I1009 19:01:34.888251  296772 start.go:160] libmachine.API.Create for "addons-999657" (driver="docker")
	I1009 19:01:34.888299  296772 client.go:168] LocalClient.Create starting
	I1009 19:01:34.888423  296772 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/21683-294150/.minikube/certs/ca.pem
	I1009 19:01:35.335047  296772 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/21683-294150/.minikube/certs/cert.pem
	I1009 19:01:36.140376  296772 cli_runner.go:164] Run: docker network inspect addons-999657 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1009 19:01:36.156680  296772 cli_runner.go:211] docker network inspect addons-999657 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1009 19:01:36.156766  296772 network_create.go:284] running [docker network inspect addons-999657] to gather additional debugging logs...
	I1009 19:01:36.156788  296772 cli_runner.go:164] Run: docker network inspect addons-999657
	W1009 19:01:36.173149  296772 cli_runner.go:211] docker network inspect addons-999657 returned with exit code 1
	I1009 19:01:36.173182  296772 network_create.go:287] error running [docker network inspect addons-999657]: docker network inspect addons-999657: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-999657 not found
	I1009 19:01:36.173196  296772 network_create.go:289] output of [docker network inspect addons-999657]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-999657 not found
	
	** /stderr **
	I1009 19:01:36.173311  296772 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1009 19:01:36.191160  296772 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001965820}
	I1009 19:01:36.191200  296772 network_create.go:124] attempt to create docker network addons-999657 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1009 19:01:36.191256  296772 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-999657 addons-999657
	I1009 19:01:36.251942  296772 network_create.go:108] docker network addons-999657 192.168.49.0/24 created
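With the network created, its subnet and gateway can be read back using the same template syntax the log itself uses; a minimal sketch for a manual check, assuming the network name from this run:

	docker network inspect addons-999657 \
	  --format '{{range .IPAM.Config}}{{.Subnet}} {{.Gateway}}{{end}}'
	# expected: 192.168.49.0/24 192.168.49.1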
	I1009 19:01:36.251977  296772 kic.go:121] calculated static IP "192.168.49.2" for the "addons-999657" container
	I1009 19:01:36.252057  296772 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1009 19:01:36.267520  296772 cli_runner.go:164] Run: docker volume create addons-999657 --label name.minikube.sigs.k8s.io=addons-999657 --label created_by.minikube.sigs.k8s.io=true
	I1009 19:01:36.284809  296772 oci.go:103] Successfully created a docker volume addons-999657
	I1009 19:01:36.284907  296772 cli_runner.go:164] Run: docker run --rm --name addons-999657-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-999657 --entrypoint /usr/bin/test -v addons-999657:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 -d /var/lib
	I1009 19:01:38.221698  296772 cli_runner.go:217] Completed: docker run --rm --name addons-999657-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-999657 --entrypoint /usr/bin/test -v addons-999657:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 -d /var/lib: (1.936738061s)
	I1009 19:01:38.221731  296772 oci.go:107] Successfully prepared a docker volume addons-999657
	I1009 19:01:38.221765  296772 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1009 19:01:38.221787  296772 kic.go:194] Starting extracting preloaded images to volume ...
	I1009 19:01:38.221864  296772 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21683-294150/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-999657:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 -I lz4 -xf /preloaded.tar -C /extractDir
	I1009 19:01:42.698005  296772 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21683-294150/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-999657:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 -I lz4 -xf /preloaded.tar -C /extractDir: (4.476095654s)
	I1009 19:01:42.698038  296772 kic.go:203] duration metric: took 4.476246125s to extract preloaded images to volume ...
	W1009 19:01:42.698182  296772 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1009 19:01:42.698298  296772 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1009 19:01:42.758663  296772 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-999657 --name addons-999657 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-999657 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-999657 --network addons-999657 --ip 192.168.49.2 --volume addons-999657:/var --security-opt apparmor=unconfined --memory=4096mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92
	I1009 19:01:43.048107  296772 cli_runner.go:164] Run: docker container inspect addons-999657 --format={{.State.Running}}
	I1009 19:01:43.072696  296772 cli_runner.go:164] Run: docker container inspect addons-999657 --format={{.State.Status}}
	I1009 19:01:43.095687  296772 cli_runner.go:164] Run: docker exec addons-999657 stat /var/lib/dpkg/alternatives/iptables
	I1009 19:01:43.153649  296772 oci.go:144] the created container "addons-999657" has a running status.
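The node container publishes its SSH and API endpoints on ephemeral 127.0.0.1 ports (see the --publish flags in the docker run above). A minimal sketch, assuming the container name from this run, for listing those mappings by hand:

	docker port addons-999657
	# prints the 127.0.0.1:<port> bindings for 22, 2376, 5000, 8443 and 32443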
	I1009 19:01:43.153678  296772 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21683-294150/.minikube/machines/addons-999657/id_rsa...
	I1009 19:01:43.873086  296772 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21683-294150/.minikube/machines/addons-999657/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1009 19:01:43.893017  296772 cli_runner.go:164] Run: docker container inspect addons-999657 --format={{.State.Status}}
	I1009 19:01:43.910063  296772 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1009 19:01:43.910084  296772 kic_runner.go:114] Args: [docker exec --privileged addons-999657 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1009 19:01:43.949748  296772 cli_runner.go:164] Run: docker container inspect addons-999657 --format={{.State.Status}}
	I1009 19:01:43.968096  296772 machine.go:93] provisionDockerMachine start ...
	I1009 19:01:43.968204  296772 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-999657
	I1009 19:01:43.986422  296772 main.go:141] libmachine: Using SSH client type: native
	I1009 19:01:43.986760  296772 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33139 <nil> <nil>}
	I1009 19:01:43.986776  296772 main.go:141] libmachine: About to run SSH command:
	hostname
	I1009 19:01:43.987437  296772 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:44722->127.0.0.1:33139: read: connection reset by peer
	I1009 19:01:47.132739  296772 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-999657
	
	I1009 19:01:47.132763  296772 ubuntu.go:182] provisioning hostname "addons-999657"
	I1009 19:01:47.132840  296772 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-999657
	I1009 19:01:47.152468  296772 main.go:141] libmachine: Using SSH client type: native
	I1009 19:01:47.152779  296772 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33139 <nil> <nil>}
	I1009 19:01:47.152795  296772 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-999657 && echo "addons-999657" | sudo tee /etc/hostname
	I1009 19:01:47.316441  296772 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-999657
	
	I1009 19:01:47.316531  296772 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-999657
	I1009 19:01:47.336445  296772 main.go:141] libmachine: Using SSH client type: native
	I1009 19:01:47.336760  296772 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33139 <nil> <nil>}
	I1009 19:01:47.336782  296772 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-999657' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-999657/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-999657' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1009 19:01:47.481529  296772 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1009 19:01:47.481557  296772 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21683-294150/.minikube CaCertPath:/home/jenkins/minikube-integration/21683-294150/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21683-294150/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21683-294150/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21683-294150/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21683-294150/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21683-294150/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21683-294150/.minikube}
	I1009 19:01:47.481576  296772 ubuntu.go:190] setting up certificates
	I1009 19:01:47.481593  296772 provision.go:84] configureAuth start
	I1009 19:01:47.481653  296772 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-999657
	I1009 19:01:47.505853  296772 provision.go:143] copyHostCerts
	I1009 19:01:47.505948  296772 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-294150/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21683-294150/.minikube/ca.pem (1078 bytes)
	I1009 19:01:47.506092  296772 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-294150/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21683-294150/.minikube/cert.pem (1123 bytes)
	I1009 19:01:47.506193  296772 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-294150/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21683-294150/.minikube/key.pem (1679 bytes)
	I1009 19:01:47.506269  296772 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21683-294150/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21683-294150/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21683-294150/.minikube/certs/ca-key.pem org=jenkins.addons-999657 san=[127.0.0.1 192.168.49.2 addons-999657 localhost minikube]
	I1009 19:01:47.983830  296772 provision.go:177] copyRemoteCerts
	I1009 19:01:47.983901  296772 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1009 19:01:47.983943  296772 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-999657
	I1009 19:01:48.002366  296772 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/21683-294150/.minikube/machines/addons-999657/id_rsa Username:docker}
	I1009 19:01:48.109203  296772 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-294150/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1009 19:01:48.128025  296772 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-294150/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1009 19:01:48.146806  296772 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-294150/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1009 19:01:48.164891  296772 provision.go:87] duration metric: took 683.272793ms to configureAuth
	I1009 19:01:48.164918  296772 ubuntu.go:206] setting minikube options for container-runtime
	I1009 19:01:48.165185  296772 config.go:182] Loaded profile config "addons-999657": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1009 19:01:48.165295  296772 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-999657
	I1009 19:01:48.182559  296772 main.go:141] libmachine: Using SSH client type: native
	I1009 19:01:48.182863  296772 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33139 <nil> <nil>}
	I1009 19:01:48.182882  296772 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1009 19:01:48.433417  296772 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1009 19:01:48.433497  296772 machine.go:96] duration metric: took 4.465372947s to provisionDockerMachine
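Provisioning reaches the node over the forwarded SSH port recorded earlier (127.0.0.1:33139) with the generated key and the docker user. A minimal sketch of the equivalent manual check, assuming those values from this run:

	ssh -o StrictHostKeyChecking=no -p 33139 \
	  -i /home/jenkins/minikube-integration/21683-294150/.minikube/machines/addons-999657/id_rsa \
	  docker@127.0.0.1 hostname
	# expected output: addons-999657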
	I1009 19:01:48.433527  296772 client.go:171] duration metric: took 13.545214098s to LocalClient.Create
	I1009 19:01:48.433590  296772 start.go:168] duration metric: took 13.54533532s to libmachine.API.Create "addons-999657"
	I1009 19:01:48.433626  296772 start.go:294] postStartSetup for "addons-999657" (driver="docker")
	I1009 19:01:48.433665  296772 start.go:323] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1009 19:01:48.433798  296772 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1009 19:01:48.433926  296772 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-999657
	I1009 19:01:48.451913  296772 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/21683-294150/.minikube/machines/addons-999657/id_rsa Username:docker}
	I1009 19:01:48.553255  296772 ssh_runner.go:195] Run: cat /etc/os-release
	I1009 19:01:48.556509  296772 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1009 19:01:48.556538  296772 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1009 19:01:48.556550  296772 filesync.go:126] Scanning /home/jenkins/minikube-integration/21683-294150/.minikube/addons for local assets ...
	I1009 19:01:48.556619  296772 filesync.go:126] Scanning /home/jenkins/minikube-integration/21683-294150/.minikube/files for local assets ...
	I1009 19:01:48.556649  296772 start.go:297] duration metric: took 122.990831ms for postStartSetup
	I1009 19:01:48.556958  296772 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-999657
	I1009 19:01:48.573496  296772 profile.go:143] Saving config to /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/addons-999657/config.json ...
	I1009 19:01:48.573804  296772 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1009 19:01:48.573857  296772 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-999657
	I1009 19:01:48.591007  296772 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/21683-294150/.minikube/machines/addons-999657/id_rsa Username:docker}
	I1009 19:01:48.690121  296772 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1009 19:01:48.695263  296772 start.go:129] duration metric: took 13.808859016s to createHost
	I1009 19:01:48.695312  296772 start.go:84] releasing machines lock for "addons-999657", held for 13.809032804s
	I1009 19:01:48.695423  296772 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-999657
	I1009 19:01:48.712595  296772 ssh_runner.go:195] Run: cat /version.json
	I1009 19:01:48.712658  296772 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-999657
	I1009 19:01:48.712922  296772 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1009 19:01:48.712986  296772 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-999657
	I1009 19:01:48.734738  296772 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/21683-294150/.minikube/machines/addons-999657/id_rsa Username:docker}
	I1009 19:01:48.743309  296772 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/21683-294150/.minikube/machines/addons-999657/id_rsa Username:docker}
	I1009 19:01:48.836802  296772 ssh_runner.go:195] Run: systemctl --version
	I1009 19:01:48.930801  296772 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1009 19:01:48.966474  296772 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1009 19:01:48.970908  296772 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1009 19:01:48.971040  296772 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1009 19:01:49.000452  296772 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1009 19:01:49.000479  296772 start.go:496] detecting cgroup driver to use...
	I1009 19:01:49.000520  296772 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1009 19:01:49.000573  296772 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1009 19:01:49.017650  296772 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1009 19:01:49.030871  296772 docker.go:218] disabling cri-docker service (if available) ...
	I1009 19:01:49.030939  296772 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1009 19:01:49.051643  296772 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1009 19:01:49.070934  296772 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1009 19:01:49.189540  296772 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1009 19:01:49.312980  296772 docker.go:234] disabling docker service ...
	I1009 19:01:49.313091  296772 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1009 19:01:49.335452  296772 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1009 19:01:49.348961  296772 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1009 19:01:49.460352  296772 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1009 19:01:49.585135  296772 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1009 19:01:49.598787  296772 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1009 19:01:49.613903  296772 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1009 19:01:49.613991  296772 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:01:49.623874  296772 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1009 19:01:49.623968  296772 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:01:49.635177  296772 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:01:49.644806  296772 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:01:49.654839  296772 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1009 19:01:49.663016  296772 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:01:49.672400  296772 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:01:49.686417  296772 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:01:49.695597  296772 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1009 19:01:49.703582  296772 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1009 19:01:49.711318  296772 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 19:01:49.825534  296772 ssh_runner.go:195] Run: sudo systemctl restart crio
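The sed edits above leave /etc/crio/crio.conf.d/02-crio.conf carrying the pause image, cgroup driver and sysctl overrides before CRI-O is restarted. A minimal sketch, assuming only the fields touched in this run, for reading the result back on the node:

	sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
	  /etc/crio/crio.conf.d/02-crio.conf
	# pause_image = "registry.k8s.io/pause:3.10.1"
	# cgroup_manager = "cgroupfs"
	# conmon_cgroup = "pod"
	#   "net.ipv4.ip_unprivileged_port_start=0",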
	I1009 19:01:49.955214  296772 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1009 19:01:49.955336  296772 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1009 19:01:49.959908  296772 start.go:564] Will wait 60s for crictl version
	I1009 19:01:49.960004  296772 ssh_runner.go:195] Run: which crictl
	I1009 19:01:49.963836  296772 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1009 19:01:49.988322  296772 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1009 19:01:49.988449  296772 ssh_runner.go:195] Run: crio --version
	I1009 19:01:50.016501  296772 ssh_runner.go:195] Run: crio --version
	I1009 19:01:50.056004  296772 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1009 19:01:50.060011  296772 cli_runner.go:164] Run: docker network inspect addons-999657 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1009 19:01:50.079399  296772 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1009 19:01:50.083525  296772 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1009 19:01:50.094449  296772 kubeadm.go:883] updating cluster {Name:addons-999657 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-999657 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNa
mes:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketV
MnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1009 19:01:50.094571  296772 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1009 19:01:50.094640  296772 ssh_runner.go:195] Run: sudo crictl images --output json
	I1009 19:01:50.135741  296772 crio.go:514] all images are preloaded for cri-o runtime.
	I1009 19:01:50.135767  296772 crio.go:433] Images already preloaded, skipping extraction
	I1009 19:01:50.135829  296772 ssh_runner.go:195] Run: sudo crictl images --output json
	I1009 19:01:50.162149  296772 crio.go:514] all images are preloaded for cri-o runtime.
	I1009 19:01:50.162175  296772 cache_images.go:85] Images are preloaded, skipping loading
	I1009 19:01:50.162184  296772 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.34.1 crio true true} ...
	I1009 19:01:50.162342  296772 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=addons-999657 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:addons-999657 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1009 19:01:50.162439  296772 ssh_runner.go:195] Run: crio config
	I1009 19:01:50.220557  296772 cni.go:84] Creating CNI manager for ""
	I1009 19:01:50.220582  296772 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1009 19:01:50.220607  296772 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1009 19:01:50.220665  296772 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-999657 NodeName:addons-999657 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kuberne
tes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1009 19:01:50.220871  296772 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-999657"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1009 19:01:50.220968  296772 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1009 19:01:50.229125  296772 binaries.go:44] Found k8s binaries, skipping transfer
	I1009 19:01:50.229216  296772 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1009 19:01:50.236889  296772 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1009 19:01:50.249608  296772 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1009 19:01:50.263391  296772 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2210 bytes)
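The kubeadm config shown above is what lands in /var/tmp/minikube/kubeadm.yaml.new on the node. A minimal sketch, not part of the test run and assuming the binary path from this log, for sanity-checking it inside the container before init:

	docker exec addons-999657 sudo \
	  /var/lib/minikube/binaries/v1.34.1/kubeadm init \
	  --config /var/tmp/minikube/kubeadm.yaml.new --dry-run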
	I1009 19:01:50.276374  296772 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1009 19:01:50.280033  296772 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1009 19:01:50.290043  296772 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 19:01:50.407012  296772 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1009 19:01:50.424063  296772 certs.go:69] Setting up /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/addons-999657 for IP: 192.168.49.2
	I1009 19:01:50.424131  296772 certs.go:195] generating shared ca certs ...
	I1009 19:01:50.424165  296772 certs.go:227] acquiring lock for ca certs: {Name:mk21fccf7de12baa51e226eec44546b42b579a70 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 19:01:50.424334  296772 certs.go:241] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/21683-294150/.minikube/ca.key
	I1009 19:01:50.607990  296772 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21683-294150/.minikube/ca.crt ...
	I1009 19:01:50.608032  296772 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-294150/.minikube/ca.crt: {Name:mk0316901a716eaa5700db6d41b8adda1dc81adc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 19:01:50.608286  296772 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21683-294150/.minikube/ca.key ...
	I1009 19:01:50.608303  296772 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-294150/.minikube/ca.key: {Name:mkccde951df0bb8152ae82f675fcd46af7288b9d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 19:01:50.608399  296772 certs.go:241] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21683-294150/.minikube/proxy-client-ca.key
	I1009 19:01:50.944235  296772 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21683-294150/.minikube/proxy-client-ca.crt ...
	I1009 19:01:50.944267  296772 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-294150/.minikube/proxy-client-ca.crt: {Name:mk044a571a6d3d56e00aa1ba715adfac50d1bbb0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 19:01:50.944453  296772 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21683-294150/.minikube/proxy-client-ca.key ...
	I1009 19:01:50.944467  296772 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-294150/.minikube/proxy-client-ca.key: {Name:mkff81370b2f76c9e643456d05c4c3484afe318e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 19:01:50.944549  296772 certs.go:257] generating profile certs ...
	I1009 19:01:50.944614  296772 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/addons-999657/client.key
	I1009 19:01:50.944632  296772 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/addons-999657/client.crt with IP's: []
	I1009 19:01:51.495445  296772 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/addons-999657/client.crt ...
	I1009 19:01:51.495480  296772 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/addons-999657/client.crt: {Name:mkb2b9db7cec29c19c97e0c0966f111d5bee6c8f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 19:01:51.495673  296772 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/addons-999657/client.key ...
	I1009 19:01:51.495686  296772 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/addons-999657/client.key: {Name:mk8b7329e68497b88fd53b32009d329d2b491dab Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 19:01:51.495771  296772 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/addons-999657/apiserver.key.efddb3c5
	I1009 19:01:51.495792  296772 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/addons-999657/apiserver.crt.efddb3c5 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I1009 19:01:52.514573  296772 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/addons-999657/apiserver.crt.efddb3c5 ...
	I1009 19:01:52.514607  296772 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/addons-999657/apiserver.crt.efddb3c5: {Name:mk623515e7a0f073c54954239de7cff11f83ba90 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 19:01:52.514814  296772 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/addons-999657/apiserver.key.efddb3c5 ...
	I1009 19:01:52.514841  296772 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/addons-999657/apiserver.key.efddb3c5: {Name:mkd4657a15eb79b60d4dbac583d3114e18057cc4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 19:01:52.514936  296772 certs.go:382] copying /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/addons-999657/apiserver.crt.efddb3c5 -> /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/addons-999657/apiserver.crt
	I1009 19:01:52.515024  296772 certs.go:386] copying /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/addons-999657/apiserver.key.efddb3c5 -> /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/addons-999657/apiserver.key
	I1009 19:01:52.515080  296772 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/addons-999657/proxy-client.key
	I1009 19:01:52.515102  296772 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/addons-999657/proxy-client.crt with IP's: []
	I1009 19:01:52.911939  296772 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/addons-999657/proxy-client.crt ...
	I1009 19:01:52.911971  296772 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/addons-999657/proxy-client.crt: {Name:mk84307c805f583c0c3d20a25774dc0045ed0754 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 19:01:52.912146  296772 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/addons-999657/proxy-client.key ...
	I1009 19:01:52.912160  296772 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/addons-999657/proxy-client.key: {Name:mk1689535ab330d4a3aed12d8422f75e38ce76ee Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 19:01:52.912363  296772 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-294150/.minikube/certs/ca-key.pem (1679 bytes)
	I1009 19:01:52.912405  296772 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-294150/.minikube/certs/ca.pem (1078 bytes)
	I1009 19:01:52.912436  296772 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-294150/.minikube/certs/cert.pem (1123 bytes)
	I1009 19:01:52.912464  296772 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-294150/.minikube/certs/key.pem (1679 bytes)
	I1009 19:01:52.913035  296772 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-294150/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1009 19:01:52.932109  296772 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-294150/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1009 19:01:52.950787  296772 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-294150/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1009 19:01:52.969762  296772 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-294150/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1009 19:01:52.988228  296772 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/addons-999657/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1009 19:01:53.006024  296772 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/addons-999657/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1009 19:01:53.023747  296772 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/addons-999657/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1009 19:01:53.044992  296772 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/addons-999657/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1009 19:01:53.064835  296772 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-294150/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1009 19:01:53.083531  296772 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1009 19:01:53.096404  296772 ssh_runner.go:195] Run: openssl version
	I1009 19:01:53.103129  296772 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1009 19:01:53.111827  296772 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1009 19:01:53.115783  296772 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  9 19:01 /usr/share/ca-certificates/minikubeCA.pem
	I1009 19:01:53.115853  296772 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1009 19:01:53.159088  296772 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1009 19:01:53.167702  296772 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1009 19:01:53.171297  296772 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1009 19:01:53.171349  296772 kubeadm.go:400] StartCluster: {Name:addons-999657 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-999657 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1009 19:01:53.171428  296772 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1009 19:01:53.171491  296772 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1009 19:01:53.200401  296772 cri.go:89] found id: ""
	I1009 19:01:53.200474  296772 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1009 19:01:53.208234  296772 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1009 19:01:53.216038  296772 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1009 19:01:53.216144  296772 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1009 19:01:53.224015  296772 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1009 19:01:53.224046  296772 kubeadm.go:157] found existing configuration files:
	
	I1009 19:01:53.224099  296772 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1009 19:01:53.231763  296772 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1009 19:01:53.231831  296772 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1009 19:01:53.239384  296772 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1009 19:01:53.247204  296772 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1009 19:01:53.247362  296772 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1009 19:01:53.255034  296772 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1009 19:01:53.262632  296772 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1009 19:01:53.262739  296772 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1009 19:01:53.270060  296772 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1009 19:01:53.277688  296772 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1009 19:01:53.277802  296772 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1009 19:01:53.285265  296772 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1009 19:01:53.323522  296772 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1009 19:01:53.323744  296772 kubeadm.go:318] [preflight] Running pre-flight checks
	I1009 19:01:53.346274  296772 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1009 19:01:53.346352  296772 kubeadm.go:318] KERNEL_VERSION: 5.15.0-1084-aws
	I1009 19:01:53.346397  296772 kubeadm.go:318] OS: Linux
	I1009 19:01:53.346449  296772 kubeadm.go:318] CGROUPS_CPU: enabled
	I1009 19:01:53.346504  296772 kubeadm.go:318] CGROUPS_CPUACCT: enabled
	I1009 19:01:53.346557  296772 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1009 19:01:53.346612  296772 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1009 19:01:53.346684  296772 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1009 19:01:53.346743  296772 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1009 19:01:53.346794  296772 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1009 19:01:53.346847  296772 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1009 19:01:53.346900  296772 kubeadm.go:318] CGROUPS_BLKIO: enabled
	I1009 19:01:53.429314  296772 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1009 19:01:53.429456  296772 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1009 19:01:53.429556  296772 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1009 19:01:53.437921  296772 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1009 19:01:53.442201  296772 out.go:252]   - Generating certificates and keys ...
	I1009 19:01:53.442309  296772 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1009 19:01:53.442449  296772 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1009 19:01:53.706989  296772 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1009 19:01:53.930872  296772 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1009 19:01:54.281879  296772 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1009 19:01:54.540236  296772 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1009 19:01:55.186843  296772 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1009 19:01:55.186984  296772 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [addons-999657 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1009 19:01:57.182473  296772 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1009 19:01:57.182778  296772 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [addons-999657 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1009 19:01:57.962862  296772 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1009 19:01:58.518577  296772 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1009 19:01:59.250353  296772 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1009 19:01:59.250651  296772 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1009 19:02:00.094661  296772 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1009 19:02:02.171797  296772 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1009 19:02:03.077326  296772 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1009 19:02:03.460547  296772 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1009 19:02:03.689089  296772 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1009 19:02:03.690151  296772 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1009 19:02:03.694233  296772 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1009 19:02:03.697880  296772 out.go:252]   - Booting up control plane ...
	I1009 19:02:03.697998  296772 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1009 19:02:03.698087  296772 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1009 19:02:03.699210  296772 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1009 19:02:03.715880  296772 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1009 19:02:03.715997  296772 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1009 19:02:03.723989  296772 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1009 19:02:03.724257  296772 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1009 19:02:03.724454  296772 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1009 19:02:03.855157  296772 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1009 19:02:03.855287  296772 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1009 19:02:04.365525  296772 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 508.078807ms
	I1009 19:02:04.366829  296772 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1009 19:02:04.367057  296772 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1009 19:02:04.367165  296772 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1009 19:02:04.367258  296772 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1009 19:02:08.086551  296772 kubeadm.go:318] [control-plane-check] kube-controller-manager is healthy after 3.718900243s
	I1009 19:02:10.876673  296772 kubeadm.go:318] [control-plane-check] kube-scheduler is healthy after 6.509522896s
	I1009 19:02:11.370887  296772 kubeadm.go:318] [control-plane-check] kube-apiserver is healthy after 7.003232749s
	I1009 19:02:11.398297  296772 kubeadm.go:318] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1009 19:02:11.410902  296772 kubeadm.go:318] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1009 19:02:11.427132  296772 kubeadm.go:318] [upload-certs] Skipping phase. Please see --upload-certs
	I1009 19:02:11.427358  296772 kubeadm.go:318] [mark-control-plane] Marking the node addons-999657 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1009 19:02:11.439327  296772 kubeadm.go:318] [bootstrap-token] Using token: diu2ln.o4wtypfu62jwn63h
	I1009 19:02:11.442470  296772 out.go:252]   - Configuring RBAC rules ...
	I1009 19:02:11.442602  296772 kubeadm.go:318] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1009 19:02:11.448346  296772 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1009 19:02:11.459650  296772 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1009 19:02:11.463889  296772 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1009 19:02:11.468295  296772 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1009 19:02:11.472697  296772 kubeadm.go:318] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1009 19:02:11.778331  296772 kubeadm.go:318] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1009 19:02:12.208024  296772 kubeadm.go:318] [addons] Applied essential addon: CoreDNS
	I1009 19:02:12.778626  296772 kubeadm.go:318] [addons] Applied essential addon: kube-proxy
	I1009 19:02:12.778650  296772 kubeadm.go:318] 
	I1009 19:02:12.778715  296772 kubeadm.go:318] Your Kubernetes control-plane has initialized successfully!
	I1009 19:02:12.778725  296772 kubeadm.go:318] 
	I1009 19:02:12.778806  296772 kubeadm.go:318] To start using your cluster, you need to run the following as a regular user:
	I1009 19:02:12.778816  296772 kubeadm.go:318] 
	I1009 19:02:12.778843  296772 kubeadm.go:318]   mkdir -p $HOME/.kube
	I1009 19:02:12.778908  296772 kubeadm.go:318]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1009 19:02:12.778965  296772 kubeadm.go:318]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1009 19:02:12.778974  296772 kubeadm.go:318] 
	I1009 19:02:12.779031  296772 kubeadm.go:318] Alternatively, if you are the root user, you can run:
	I1009 19:02:12.779039  296772 kubeadm.go:318] 
	I1009 19:02:12.779088  296772 kubeadm.go:318]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1009 19:02:12.779096  296772 kubeadm.go:318] 
	I1009 19:02:12.779150  296772 kubeadm.go:318] You should now deploy a pod network to the cluster.
	I1009 19:02:12.779231  296772 kubeadm.go:318] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1009 19:02:12.779310  296772 kubeadm.go:318]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1009 19:02:12.779320  296772 kubeadm.go:318] 
	I1009 19:02:12.779407  296772 kubeadm.go:318] You can now join any number of control-plane nodes by copying certificate authorities
	I1009 19:02:12.779490  296772 kubeadm.go:318] and service account keys on each node and then running the following as root:
	I1009 19:02:12.779510  296772 kubeadm.go:318] 
	I1009 19:02:12.779598  296772 kubeadm.go:318]   kubeadm join control-plane.minikube.internal:8443 --token diu2ln.o4wtypfu62jwn63h \
	I1009 19:02:12.779709  296772 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:e766d16640f098061f552dd476e80ebd3809bd57b4957045222f32c55d34903e \
	I1009 19:02:12.779734  296772 kubeadm.go:318] 	--control-plane 
	I1009 19:02:12.779739  296772 kubeadm.go:318] 
	I1009 19:02:12.779830  296772 kubeadm.go:318] Then you can join any number of worker nodes by running the following on each as root:
	I1009 19:02:12.779840  296772 kubeadm.go:318] 
	I1009 19:02:12.779925  296772 kubeadm.go:318] kubeadm join control-plane.minikube.internal:8443 --token diu2ln.o4wtypfu62jwn63h \
	I1009 19:02:12.780035  296772 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:e766d16640f098061f552dd476e80ebd3809bd57b4957045222f32c55d34903e 
	I1009 19:02:12.784264  296772 kubeadm.go:318] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1009 19:02:12.784491  296772 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1009 19:02:12.784597  296772 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1009 19:02:12.784617  296772 cni.go:84] Creating CNI manager for ""
	I1009 19:02:12.784625  296772 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1009 19:02:12.787862  296772 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1009 19:02:12.790845  296772 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1009 19:02:12.795311  296772 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1009 19:02:12.795333  296772 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1009 19:02:12.808533  296772 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1009 19:02:13.108853  296772 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1009 19:02:13.108975  296772 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1009 19:02:13.109062  296772 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-999657 minikube.k8s.io/updated_at=2025_10_09T19_02_13_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=67931d2cc0a2e29153be17ee4f2d502b8a45c9cb minikube.k8s.io/name=addons-999657 minikube.k8s.io/primary=true
	I1009 19:02:13.131923  296772 ops.go:34] apiserver oom_adj: -16
	I1009 19:02:13.255007  296772 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1009 19:02:13.755602  296772 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1009 19:02:14.255388  296772 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1009 19:02:14.755890  296772 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1009 19:02:15.256089  296772 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1009 19:02:15.755104  296772 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1009 19:02:16.255619  296772 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1009 19:02:16.755695  296772 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1009 19:02:16.871691  296772 kubeadm.go:1113] duration metric: took 3.762774782s to wait for elevateKubeSystemPrivileges
	I1009 19:02:16.871721  296772 kubeadm.go:402] duration metric: took 23.700375217s to StartCluster
	I1009 19:02:16.871739  296772 settings.go:142] acquiring lock: {Name:mk20228ebaa2294ae35726600a0d8058088b24a4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 19:02:16.871857  296772 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21683-294150/kubeconfig
	I1009 19:02:16.872247  296772 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-294150/kubeconfig: {Name:mke4661344aa77e10bf6690825e8d3a29b29b411 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 19:02:16.872438  296772 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1009 19:02:16.872616  296772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1009 19:02:16.872880  296772 config.go:182] Loaded profile config "addons-999657": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1009 19:02:16.872913  296772 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:true storage-provisioner:true storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I1009 19:02:16.872983  296772 addons.go:69] Setting yakd=true in profile "addons-999657"
	I1009 19:02:16.872997  296772 addons.go:238] Setting addon yakd=true in "addons-999657"
	I1009 19:02:16.873019  296772 host.go:66] Checking if "addons-999657" exists ...
	I1009 19:02:16.873163  296772 addons.go:69] Setting inspektor-gadget=true in profile "addons-999657"
	I1009 19:02:16.873185  296772 addons.go:238] Setting addon inspektor-gadget=true in "addons-999657"
	I1009 19:02:16.873216  296772 host.go:66] Checking if "addons-999657" exists ...
	I1009 19:02:16.873541  296772 cli_runner.go:164] Run: docker container inspect addons-999657 --format={{.State.Status}}
	I1009 19:02:16.873662  296772 cli_runner.go:164] Run: docker container inspect addons-999657 --format={{.State.Status}}
	I1009 19:02:16.873932  296772 addons.go:69] Setting metrics-server=true in profile "addons-999657"
	I1009 19:02:16.873953  296772 addons.go:238] Setting addon metrics-server=true in "addons-999657"
	I1009 19:02:16.873975  296772 host.go:66] Checking if "addons-999657" exists ...
	I1009 19:02:16.874390  296772 cli_runner.go:164] Run: docker container inspect addons-999657 --format={{.State.Status}}
	I1009 19:02:16.876497  296772 addons.go:69] Setting amd-gpu-device-plugin=true in profile "addons-999657"
	I1009 19:02:16.876531  296772 addons.go:238] Setting addon amd-gpu-device-plugin=true in "addons-999657"
	I1009 19:02:16.876566  296772 host.go:66] Checking if "addons-999657" exists ...
	I1009 19:02:16.877026  296772 cli_runner.go:164] Run: docker container inspect addons-999657 --format={{.State.Status}}
	I1009 19:02:16.877398  296772 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-999657"
	I1009 19:02:16.877463  296772 addons.go:238] Setting addon nvidia-device-plugin=true in "addons-999657"
	I1009 19:02:16.877615  296772 host.go:66] Checking if "addons-999657" exists ...
	I1009 19:02:16.879241  296772 cli_runner.go:164] Run: docker container inspect addons-999657 --format={{.State.Status}}
	I1009 19:02:16.882494  296772 addons.go:69] Setting cloud-spanner=true in profile "addons-999657"
	I1009 19:02:16.882537  296772 addons.go:238] Setting addon cloud-spanner=true in "addons-999657"
	I1009 19:02:16.882572  296772 host.go:66] Checking if "addons-999657" exists ...
	I1009 19:02:16.883108  296772 cli_runner.go:164] Run: docker container inspect addons-999657 --format={{.State.Status}}
	I1009 19:02:16.884515  296772 addons.go:69] Setting registry=true in profile "addons-999657"
	I1009 19:02:16.884544  296772 addons.go:238] Setting addon registry=true in "addons-999657"
	I1009 19:02:16.884581  296772 host.go:66] Checking if "addons-999657" exists ...
	I1009 19:02:16.885060  296772 cli_runner.go:164] Run: docker container inspect addons-999657 --format={{.State.Status}}
	I1009 19:02:16.889480  296772 addons.go:69] Setting registry-creds=true in profile "addons-999657"
	I1009 19:02:16.889524  296772 addons.go:238] Setting addon registry-creds=true in "addons-999657"
	I1009 19:02:16.889560  296772 host.go:66] Checking if "addons-999657" exists ...
	I1009 19:02:16.890052  296772 cli_runner.go:164] Run: docker container inspect addons-999657 --format={{.State.Status}}
	I1009 19:02:16.892322  296772 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-999657"
	I1009 19:02:16.892395  296772 addons.go:238] Setting addon csi-hostpath-driver=true in "addons-999657"
	I1009 19:02:16.892427  296772 host.go:66] Checking if "addons-999657" exists ...
	I1009 19:02:16.892993  296772 cli_runner.go:164] Run: docker container inspect addons-999657 --format={{.State.Status}}
	I1009 19:02:16.898352  296772 addons.go:69] Setting storage-provisioner=true in profile "addons-999657"
	I1009 19:02:16.898398  296772 addons.go:238] Setting addon storage-provisioner=true in "addons-999657"
	I1009 19:02:16.898433  296772 host.go:66] Checking if "addons-999657" exists ...
	I1009 19:02:16.898891  296772 cli_runner.go:164] Run: docker container inspect addons-999657 --format={{.State.Status}}
	I1009 19:02:16.905254  296772 addons.go:69] Setting default-storageclass=true in profile "addons-999657"
	I1009 19:02:16.905616  296772 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-999657"
	I1009 19:02:16.905984  296772 cli_runner.go:164] Run: docker container inspect addons-999657 --format={{.State.Status}}
	I1009 19:02:16.920984  296772 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-999657"
	I1009 19:02:16.921020  296772 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-999657"
	I1009 19:02:16.921528  296772 cli_runner.go:164] Run: docker container inspect addons-999657 --format={{.State.Status}}
	I1009 19:02:16.922299  296772 addons.go:69] Setting gcp-auth=true in profile "addons-999657"
	I1009 19:02:16.922332  296772 mustload.go:65] Loading cluster: addons-999657
	I1009 19:02:16.922546  296772 config.go:182] Loaded profile config "addons-999657": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1009 19:02:16.922808  296772 cli_runner.go:164] Run: docker container inspect addons-999657 --format={{.State.Status}}
	I1009 19:02:16.935250  296772 addons.go:69] Setting volcano=true in profile "addons-999657"
	I1009 19:02:16.935284  296772 addons.go:238] Setting addon volcano=true in "addons-999657"
	I1009 19:02:16.935322  296772 host.go:66] Checking if "addons-999657" exists ...
	I1009 19:02:16.935792  296772 cli_runner.go:164] Run: docker container inspect addons-999657 --format={{.State.Status}}
	I1009 19:02:16.943159  296772 addons.go:69] Setting ingress=true in profile "addons-999657"
	I1009 19:02:16.943196  296772 addons.go:238] Setting addon ingress=true in "addons-999657"
	I1009 19:02:16.943239  296772 host.go:66] Checking if "addons-999657" exists ...
	I1009 19:02:16.943729  296772 cli_runner.go:164] Run: docker container inspect addons-999657 --format={{.State.Status}}
	I1009 19:02:16.954476  296772 addons.go:69] Setting volumesnapshots=true in profile "addons-999657"
	I1009 19:02:16.954518  296772 addons.go:238] Setting addon volumesnapshots=true in "addons-999657"
	I1009 19:02:16.954553  296772 host.go:66] Checking if "addons-999657" exists ...
	I1009 19:02:16.955045  296772 cli_runner.go:164] Run: docker container inspect addons-999657 --format={{.State.Status}}
	I1009 19:02:16.955185  296772 addons.go:69] Setting ingress-dns=true in profile "addons-999657"
	I1009 19:02:16.955198  296772 addons.go:238] Setting addon ingress-dns=true in "addons-999657"
	I1009 19:02:16.955224  296772 host.go:66] Checking if "addons-999657" exists ...
	I1009 19:02:16.955611  296772 cli_runner.go:164] Run: docker container inspect addons-999657 --format={{.State.Status}}
	I1009 19:02:16.969704  296772 out.go:179] * Verifying Kubernetes components...
	I1009 19:02:17.025539  296772 out.go:179]   - Using image docker.io/marcnuri/yakd:0.0.5
	I1009 19:02:17.130068  296772 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 19:02:17.142363  296772 out.go:179]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.8.0
	I1009 19:02:17.143917  296772 out.go:179]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.45.0
	I1009 19:02:17.146918  296772 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1009 19:02:17.147023  296772 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1009 19:02:17.147161  296772 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-999657
	I1009 19:02:17.160426  296772 out.go:179]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I1009 19:02:17.163360  296772 addons.go:435] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1009 19:02:17.163390  296772 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I1009 19:02:17.163455  296772 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-999657
	I1009 19:02:17.182109  296772 addons.go:238] Setting addon storage-provisioner-rancher=true in "addons-999657"
	I1009 19:02:17.182152  296772 host.go:66] Checking if "addons-999657" exists ...
	I1009 19:02:17.182744  296772 cli_runner.go:164] Run: docker container inspect addons-999657 --format={{.State.Status}}
	I1009 19:02:17.146942  296772 addons.go:435] installing /etc/kubernetes/addons/ig-crd.yaml
	I1009 19:02:17.198031  296772 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (14 bytes)
	I1009 19:02:17.198209  296772 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-999657
	I1009 19:02:17.211571  296772 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1009 19:02:17.146958  296772 addons.go:435] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1009 19:02:17.212161  296772 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I1009 19:02:17.212267  296772 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-999657
	I1009 19:02:17.197523  296772 addons.go:238] Setting addon default-storageclass=true in "addons-999657"
	I1009 19:02:17.214159  296772 host.go:66] Checking if "addons-999657" exists ...
	I1009 19:02:17.197544  296772 out.go:179]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.9
	I1009 19:02:17.197550  296772 out.go:179]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1009 19:02:17.197554  296772 out.go:179]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.17.4
	W1009 19:02:17.197893  296772 out.go:285] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I1009 19:02:17.215043  296772 host.go:66] Checking if "addons-999657" exists ...
	I1009 19:02:17.238923  296772 cli_runner.go:164] Run: docker container inspect addons-999657 --format={{.State.Status}}
	I1009 19:02:17.240602  296772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1009 19:02:17.265175  296772 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1009 19:02:17.269434  296772 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1009 19:02:17.275715  296772 out.go:179]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1009 19:02:17.276082  296772 out.go:179]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.42
	I1009 19:02:17.298874  296772 out.go:179]   - Using image registry.k8s.io/ingress-nginx/controller:v1.13.2
	I1009 19:02:17.302980  296772 addons.go:435] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1009 19:02:17.303048  296772 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1009 19:02:17.303152  296772 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-999657
	I1009 19:02:17.309192  296772 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.2
	I1009 19:02:17.313238  296772 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.2
	I1009 19:02:17.318938  296772 addons.go:435] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1009 19:02:17.319016  296772 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I1009 19:02:17.319122  296772 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-999657
	I1009 19:02:17.319308  296772 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/21683-294150/.minikube/machines/addons-999657/id_rsa Username:docker}
	I1009 19:02:17.320120  296772 out.go:179]   - Using image docker.io/upmcenterprises/registry-creds:1.10
	I1009 19:02:17.320343  296772 out.go:179]   - Using image docker.io/kicbase/minikube-ingress-dns:0.0.4
	I1009 19:02:17.320534  296772 addons.go:435] installing /etc/kubernetes/addons/deployment.yaml
	I1009 19:02:17.320547  296772 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1009 19:02:17.320605  296772 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-999657
	I1009 19:02:17.348467  296772 out.go:179]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1009 19:02:17.351702  296772 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1009 19:02:17.354629  296772 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1009 19:02:17.358905  296772 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1009 19:02:17.341574  296772 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1009 19:02:17.363352  296772 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1009 19:02:17.341589  296772 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1009 19:02:17.341633  296772 addons.go:435] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1009 19:02:17.369546  296772 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2889 bytes)
	I1009 19:02:17.341653  296772 addons.go:435] installing /etc/kubernetes/addons/registry-creds-rc.yaml
	I1009 19:02:17.369571  296772 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-creds-rc.yaml (3306 bytes)
	I1009 19:02:17.369652  296772 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-999657
	I1009 19:02:17.372383  296772 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/21683-294150/.minikube/machines/addons-999657/id_rsa Username:docker}
	I1009 19:02:17.375298  296772 addons.go:435] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1009 19:02:17.375372  296772 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1009 19:02:17.375472  296772 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-999657
	I1009 19:02:17.381310  296772 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-999657
	I1009 19:02:17.363303  296772 out.go:179]   - Using image docker.io/registry:3.0.0
	I1009 19:02:17.381772  296772 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-999657
	I1009 19:02:17.394795  296772 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1009 19:02:17.394817  296772 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1009 19:02:17.394881  296772 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-999657
	I1009 19:02:17.412283  296772 addons.go:435] installing /etc/kubernetes/addons/registry-rc.yaml
	I1009 19:02:17.412303  296772 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I1009 19:02:17.412383  296772 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-999657
	I1009 19:02:17.416946  296772 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/21683-294150/.minikube/machines/addons-999657/id_rsa Username:docker}
	I1009 19:02:17.421604  296772 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/21683-294150/.minikube/machines/addons-999657/id_rsa Username:docker}
	I1009 19:02:17.448568  296772 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1009 19:02:17.448590  296772 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1009 19:02:17.448656  296772 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-999657
	I1009 19:02:17.449226  296772 out.go:179]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1009 19:02:17.455501  296772 out.go:179]   - Using image docker.io/busybox:stable
	I1009 19:02:17.461252  296772 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1009 19:02:17.461284  296772 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1009 19:02:17.461353  296772 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-999657
	I1009 19:02:17.537349  296772 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/21683-294150/.minikube/machines/addons-999657/id_rsa Username:docker}
	I1009 19:02:17.561655  296772 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/21683-294150/.minikube/machines/addons-999657/id_rsa Username:docker}
	I1009 19:02:17.569258  296772 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/21683-294150/.minikube/machines/addons-999657/id_rsa Username:docker}
	I1009 19:02:17.583393  296772 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/21683-294150/.minikube/machines/addons-999657/id_rsa Username:docker}
	I1009 19:02:17.598493  296772 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/21683-294150/.minikube/machines/addons-999657/id_rsa Username:docker}
	I1009 19:02:17.614264  296772 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/21683-294150/.minikube/machines/addons-999657/id_rsa Username:docker}
	I1009 19:02:17.617004  296772 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1009 19:02:17.616904  296772 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/21683-294150/.minikube/machines/addons-999657/id_rsa Username:docker}
	I1009 19:02:17.620359  296772 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/21683-294150/.minikube/machines/addons-999657/id_rsa Username:docker}
	I1009 19:02:17.630415  296772 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/21683-294150/.minikube/machines/addons-999657/id_rsa Username:docker}
	I1009 19:02:17.638218  296772 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/21683-294150/.minikube/machines/addons-999657/id_rsa Username:docker}
	I1009 19:02:17.646030  296772 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/21683-294150/.minikube/machines/addons-999657/id_rsa Username:docker}
	W1009 19:02:17.647815  296772 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1009 19:02:17.647846  296772 retry.go:31] will retry after 191.166269ms: ssh: handshake failed: EOF
	W1009 19:02:17.648056  296772 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1009 19:02:17.648069  296772 retry.go:31] will retry after 185.806398ms: ssh: handshake failed: EOF
	I1009 19:02:18.292967  296772 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1009 19:02:18.363586  296772 addons.go:435] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1009 19:02:18.363653  296772 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1009 19:02:18.390691  296772 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1009 19:02:18.390767  296772 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1009 19:02:18.403835  296772 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml
	I1009 19:02:18.417256  296772 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1009 19:02:18.451364  296772 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1009 19:02:18.453682  296772 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1009 19:02:18.463989  296772 addons.go:435] installing /etc/kubernetes/addons/registry-svc.yaml
	I1009 19:02:18.464063  296772 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1009 19:02:18.482145  296772 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1009 19:02:18.482219  296772 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1009 19:02:18.511906  296772 addons.go:435] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1009 19:02:18.511983  296772 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I1009 19:02:18.520580  296772 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1009 19:02:18.530824  296772 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1009 19:02:18.539830  296772 addons.go:435] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1009 19:02:18.539906  296772 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1009 19:02:18.543672  296772 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1009 19:02:18.565433  296772 addons.go:435] installing /etc/kubernetes/addons/ig-deployment.yaml
	I1009 19:02:18.565510  296772 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (15034 bytes)
	I1009 19:02:18.591582  296772 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1009 19:02:18.591610  296772 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1009 19:02:18.602946  296772 addons.go:435] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1009 19:02:18.602972  296772 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1009 19:02:18.664415  296772 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1009 19:02:18.664442  296772 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1009 19:02:18.691164  296772 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1009 19:02:18.736210  296772 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1009 19:02:18.745350  296772 addons.go:435] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1009 19:02:18.745382  296772 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1009 19:02:18.755057  296772 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1009 19:02:18.755084  296772 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1009 19:02:18.787758  296772 addons.go:435] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1009 19:02:18.787798  296772 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1009 19:02:18.831733  296772 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1009 19:02:18.831779  296772 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1009 19:02:18.833040  296772 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1009 19:02:18.959342  296772 addons.go:435] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1009 19:02:18.959365  296772 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1009 19:02:18.966191  296772 addons.go:435] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1009 19:02:18.966213  296772 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1009 19:02:18.969509  296772 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1009 19:02:18.971390  296772 addons.go:435] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1009 19:02:18.971406  296772 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1009 19:02:19.152276  296772 addons.go:435] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1009 19:02:19.152349  296772 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1009 19:02:19.178753  296772 addons.go:435] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1009 19:02:19.178822  296772 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I1009 19:02:19.249679  296772 addons.go:435] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1009 19:02:19.249749  296772 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1009 19:02:19.299279  296772 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1009 19:02:19.299355  296772 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1009 19:02:19.318885  296772 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1009 19:02:19.318963  296772 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1009 19:02:19.370928  296772 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1009 19:02:19.376097  296772 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1009 19:02:19.376119  296772 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1009 19:02:19.418347  296772 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.177674775s)
	I1009 19:02:19.418377  296772 start.go:977] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
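For reference, the sed pipeline completed above rewrites the coredns ConfigMap so that host.minikube.internal resolves to the host gateway address. Reconstructed from that command, the fragment injected into the Corefile looks roughly like this (inserted ahead of the existing forward directive, with a log directive added before errors):

	hosts {
	   192.168.49.1 host.minikube.internal
	   fallthrough
	}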
	I1009 19:02:19.418443  296772 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (1.801419696s)
	I1009 19:02:19.419208  296772 node_ready.go:35] waiting up to 6m0s for node "addons-999657" to be "Ready" ...
	I1009 19:02:19.524864  296772 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1009 19:02:19.628438  296772 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1009 19:02:19.628463  296772 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1009 19:02:19.804109  296772 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1009 19:02:19.804134  296772 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1009 19:02:19.916858  296772 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1009 19:02:19.924150  296772 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-999657" context rescaled to 1 replicas
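The rescale noted here brings the coredns Deployment down to a single replica; minikube performs it through its own client, but an equivalent manual step (a sketch, not the call the tool actually makes) would be:

	kubectl -n kube-system scale deployment coredns --replicas=1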
	W1009 19:02:21.462792  296772 node_ready.go:57] node "addons-999657" has "Ready":"False" status (will retry)
	I1009 19:02:23.221940  296772 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (4.92889567s)
	I1009 19:02:23.221975  296772 addons.go:479] Verifying addon ingress=true in "addons-999657"
	I1009 19:02:23.222060  296772 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml: (4.818139453s)
	I1009 19:02:23.222148  296772 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (4.804817631s)
	I1009 19:02:23.222194  296772 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (4.770760015s)
	I1009 19:02:23.222260  296772 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (4.768506929s)
	I1009 19:02:23.222507  296772 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (4.701861401s)
	I1009 19:02:23.222564  296772 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (4.691661522s)
	I1009 19:02:23.222596  296772 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (4.6788585s)
	I1009 19:02:23.222637  296772 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (4.531449671s)
	I1009 19:02:23.222937  296772 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (4.486687201s)
	W1009 19:02:23.222966  296772 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget created
	serviceaccount/gadget created
	configmap/gadget created
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role created
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding created
	role.rbac.authorization.k8s.io/gadget-role created
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding created
	daemonset.apps/gadget created
	
	stderr:
	Warning: spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/gadget]: deprecated since v1.30; use the "appArmorProfile" field instead
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:02:23.222982  296772 retry.go:31] will retry after 326.662579ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget created
	serviceaccount/gadget created
	configmap/gadget created
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role created
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding created
	role.rbac.authorization.k8s.io/gadget-role created
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding created
	daemonset.apps/gadget created
	
	stderr:
	Warning: spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/gadget]: deprecated since v1.30; use the "appArmorProfile" field instead
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
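kubectl rejects /etc/kubernetes/addons/ig-crd.yaml here because client-side validation requires every document in an applied file to carry top-level apiVersion and kind fields. The actual contents of ig-crd.yaml are not shown in this log; purely as an illustration of a header that passes that check, a minimal CustomResourceDefinition (all names below are placeholders, not the real gadget CRD) looks like:

	apiVersion: apiextensions.k8s.io/v1
	kind: CustomResourceDefinition
	metadata:
	  name: examples.demo.example.com   # placeholder group/name, illustrative only
	spec:
	  group: demo.example.com
	  scope: Namespaced
	  names:
	    plural: examples
	    singular: example
	    kind: Example
	  versions:
	    - name: v1
	      served: true
	      storage: true
	      schema:
	        openAPIV3Schema:
	          type: object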
	I1009 19:02:23.223014  296772 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (4.389949918s)
	I1009 19:02:23.223024  296772 addons.go:479] Verifying addon registry=true in "addons-999657"
	I1009 19:02:23.223479  296772 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (4.253938567s)
	I1009 19:02:23.223497  296772 addons.go:479] Verifying addon metrics-server=true in "addons-999657"
	I1009 19:02:23.223535  296772 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (3.852578615s)
	I1009 19:02:23.223658  296772 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (3.698765181s)
	W1009 19:02:23.223678  296772 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1009 19:02:23.223690  296772 retry.go:31] will retry after 369.744237ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
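The csi-hostpath-snapclass object fails to map because the snapshot.storage.k8s.io CRDs in the same batch are not yet registered when the dependent object is validated, hence the "ensure CRDs are installed first" hint and the retry. A hedged sketch of an ordering that avoids this race, reusing the file paths from the log, would be to apply the CRDs, wait for them to be established, then apply the objects that depend on them:

	kubectl apply \
	  -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml \
	  -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml \
	  -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	kubectl wait --for=condition=Established \
	  crd/volumesnapshotclasses.snapshot.storage.k8s.io --timeout=60s
	kubectl apply \
	  -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml \
	  -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml \
	  -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml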
	I1009 19:02:23.225836  296772 out.go:179] * Verifying ingress addon...
	I1009 19:02:23.227957  296772 out.go:179] * Verifying registry addon...
	I1009 19:02:23.227973  296772 out.go:179] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-999657 service yakd-dashboard -n yakd-dashboard
	
	I1009 19:02:23.230764  296772 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1009 19:02:23.233823  296772 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1009 19:02:23.236975  296772 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1009 19:02:23.236994  296772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1009 19:02:23.242537  296772 out.go:285] ! Enabling 'storage-provisioner-rancher' returned an error: running callbacks: [Error making local-path the default storage class: Error while marking storage class local-path as default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
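The storage-provisioner-rancher warning above is an optimistic-concurrency conflict while marking local-path as the default StorageClass; the default flag is only an annotation, so re-applying the change after the conflict succeeds. As a rough manual equivalent (not the code path minikube uses):

	kubectl patch storageclass local-path -p \
	  '{"metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'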
	I1009 19:02:23.292030  296772 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I1009 19:02:23.292057  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 19:02:23.493373  296772 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (3.576465997s)
	I1009 19:02:23.493459  296772 addons.go:479] Verifying addon csi-hostpath-driver=true in "addons-999657"
	I1009 19:02:23.496704  296772 out.go:179] * Verifying csi-hostpath-driver addon...
	I1009 19:02:23.501548  296772 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1009 19:02:23.507130  296772 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1009 19:02:23.507156  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 19:02:23.550495  296772 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1009 19:02:23.594370  296772 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1009 19:02:23.745805  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 19:02:23.745961  296772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1009 19:02:23.923211  296772 node_ready.go:57] node "addons-999657" has "Ready":"False" status (will retry)
	I1009 19:02:24.010821  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 19:02:24.236729  296772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 19:02:24.238343  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 19:02:24.506122  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 19:02:24.628713  296772 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.078177838s)
	W1009 19:02:24.628748  296772 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:02:24.628769  296772 retry.go:31] will retry after 435.774096ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:02:24.628859  296772 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.034455942s)
	I1009 19:02:24.734899  296772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 19:02:24.737645  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 19:02:24.970027  296772 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1009 19:02:24.970116  296772 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-999657
	I1009 19:02:24.987218  296772 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/21683-294150/.minikube/machines/addons-999657/id_rsa Username:docker}
	I1009 19:02:25.005663  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 19:02:25.064753  296772 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1009 19:02:25.113195  296772 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1009 19:02:25.129726  296772 addons.go:238] Setting addon gcp-auth=true in "addons-999657"
	I1009 19:02:25.129778  296772 host.go:66] Checking if "addons-999657" exists ...
	I1009 19:02:25.130238  296772 cli_runner.go:164] Run: docker container inspect addons-999657 --format={{.State.Status}}
	I1009 19:02:25.158584  296772 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1009 19:02:25.158635  296772 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-999657
	I1009 19:02:25.177902  296772 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/21683-294150/.minikube/machines/addons-999657/id_rsa Username:docker}
	I1009 19:02:25.234392  296772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 19:02:25.236294  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 19:02:25.505554  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 19:02:25.735177  296772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 19:02:25.742866  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1009 19:02:25.905789  296772 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:02:25.905828  296772 retry.go:31] will retry after 385.413564ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:02:25.909505  296772 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.2
	I1009 19:02:25.912481  296772 out.go:179]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I1009 19:02:25.915230  296772 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1009 19:02:25.915255  296772 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1009 19:02:25.930665  296772 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1009 19:02:25.930688  296772 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1009 19:02:25.944562  296772 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1009 19:02:25.944586  296772 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I1009 19:02:25.958506  296772 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1009 19:02:26.005041  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 19:02:26.235488  296772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 19:02:26.238116  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 19:02:26.292400  296772 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	W1009 19:02:26.430002  296772 node_ready.go:57] node "addons-999657" has "Ready":"False" status (will retry)
	I1009 19:02:26.521858  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 19:02:26.568749  296772 addons.go:479] Verifying addon gcp-auth=true in "addons-999657"
	I1009 19:02:26.571967  296772 out.go:179] * Verifying gcp-auth addon...
	I1009 19:02:26.575714  296772 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1009 19:02:26.579054  296772 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1009 19:02:26.579078  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 19:02:26.740069  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 19:02:26.740467  296772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 19:02:27.004763  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 19:02:27.078618  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1009 19:02:27.210908  296772 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:02:27.210938  296772 retry.go:31] will retry after 642.044981ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:02:27.234162  296772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 19:02:27.236655  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 19:02:27.505554  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 19:02:27.579360  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 19:02:27.742325  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 19:02:27.742941  296772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 19:02:27.853255  296772 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1009 19:02:28.005289  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 19:02:28.079550  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 19:02:28.238422  296772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 19:02:28.239028  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 19:02:28.504856  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 19:02:28.579493  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1009 19:02:28.671747  296772 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:02:28.671780  296772 retry.go:31] will retry after 875.659797ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:02:28.734229  296772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 19:02:28.736685  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1009 19:02:28.922745  296772 node_ready.go:57] node "addons-999657" has "Ready":"False" status (will retry)
	I1009 19:02:29.006183  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 19:02:29.079092  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 19:02:29.234454  296772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 19:02:29.236804  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 19:02:29.504609  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 19:02:29.547745  296772 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1009 19:02:29.579116  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 19:02:29.735020  296772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 19:02:29.737210  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 19:02:30.004711  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 19:02:30.089864  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 19:02:30.238919  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 19:02:30.239624  296772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1009 19:02:30.391750  296772 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:02:30.391783  296772 retry.go:31] will retry after 2.340587157s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:02:30.504452  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 19:02:30.579268  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 19:02:30.734979  296772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 19:02:30.737238  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 19:02:31.005297  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 19:02:31.079377  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 19:02:31.234569  296772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 19:02:31.236632  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1009 19:02:31.422724  296772 node_ready.go:57] node "addons-999657" has "Ready":"False" status (will retry)
	I1009 19:02:31.504535  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 19:02:31.579623  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 19:02:31.735940  296772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 19:02:31.737613  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 19:02:32.005914  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 19:02:32.078909  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 19:02:32.234298  296772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 19:02:32.236523  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 19:02:32.504379  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 19:02:32.579198  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 19:02:32.733575  296772 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1009 19:02:32.746517  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 19:02:32.746813  296772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 19:02:33.005614  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 19:02:33.079676  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 19:02:33.234686  296772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 19:02:33.237131  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1009 19:02:33.425045  296772 node_ready.go:57] node "addons-999657" has "Ready":"False" status (will retry)
	I1009 19:02:33.504728  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1009 19:02:33.543312  296772 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:02:33.543390  296772 retry.go:31] will retry after 2.399666522s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:02:33.579349  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 19:02:33.739809  296772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 19:02:33.742840  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 19:02:34.005018  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 19:02:34.079297  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 19:02:34.234939  296772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 19:02:34.237265  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 19:02:34.505412  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 19:02:34.579318  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 19:02:34.735148  296772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 19:02:34.742230  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 19:02:35.004695  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 19:02:35.078635  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 19:02:35.233865  296772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 19:02:35.236184  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 19:02:35.505059  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 19:02:35.579181  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 19:02:35.734907  296772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 19:02:35.737196  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1009 19:02:35.922215  296772 node_ready.go:57] node "addons-999657" has "Ready":"False" status (will retry)
	I1009 19:02:35.943478  296772 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1009 19:02:36.008993  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 19:02:36.078991  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 19:02:36.235475  296772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 19:02:36.237255  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 19:02:36.505209  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 19:02:36.579745  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 19:02:36.738152  296772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 19:02:36.742355  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1009 19:02:36.773640  296772 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:02:36.773674  296772 retry.go:31] will retry after 6.060744408s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:02:37.004813  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 19:02:37.078837  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 19:02:37.234842  296772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 19:02:37.237012  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 19:02:37.504775  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 19:02:37.578729  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 19:02:37.740891  296772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 19:02:37.740979  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1009 19:02:37.922959  296772 node_ready.go:57] node "addons-999657" has "Ready":"False" status (will retry)
	I1009 19:02:38.004581  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 19:02:38.079757  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 19:02:38.234489  296772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 19:02:38.236938  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 19:02:38.505203  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 19:02:38.579386  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 19:02:38.735787  296772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 19:02:38.738538  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 19:02:39.006049  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 19:02:39.079721  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 19:02:39.234049  296772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 19:02:39.236579  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 19:02:39.504726  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 19:02:39.579759  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 19:02:39.733972  296772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 19:02:39.736360  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 19:02:40.005472  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 19:02:40.086286  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 19:02:40.235084  296772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 19:02:40.238188  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1009 19:02:40.422482  296772 node_ready.go:57] node "addons-999657" has "Ready":"False" status (will retry)
	I1009 19:02:40.504466  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 19:02:40.579541  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 19:02:40.736982  296772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 19:02:40.738263  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 19:02:41.005436  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 19:02:41.080001  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 19:02:41.234453  296772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 19:02:41.236436  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 19:02:41.504560  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 19:02:41.579675  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 19:02:41.736332  296772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 19:02:41.741723  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 19:02:42.004493  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 19:02:42.079760  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 19:02:42.235569  296772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 19:02:42.237598  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 19:02:42.505376  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 19:02:42.579434  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 19:02:42.734383  296772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 19:02:42.741144  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 19:02:42.835375  296772 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	W1009 19:02:42.922734  296772 node_ready.go:57] node "addons-999657" has "Ready":"False" status (will retry)
	I1009 19:02:43.005307  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 19:02:43.079914  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 19:02:43.234613  296772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 19:02:43.236408  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 19:02:43.505181  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 19:02:43.579552  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1009 19:02:43.652311  296772 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:02:43.652347  296772 retry.go:31] will retry after 9.23352868s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:02:43.734906  296772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 19:02:43.740048  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 19:02:44.004569  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 19:02:44.079651  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 19:02:44.235672  296772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 19:02:44.237067  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 19:02:44.505280  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 19:02:44.579251  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 19:02:44.735515  296772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 19:02:44.737890  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1009 19:02:44.922896  296772 node_ready.go:57] node "addons-999657" has "Ready":"False" status (will retry)
	I1009 19:02:45.004579  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 19:02:45.080688  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 19:02:45.238073  296772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 19:02:45.238164  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 19:02:45.505719  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 19:02:45.580209  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 19:02:45.739656  296772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 19:02:45.740422  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 19:02:46.004742  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 19:02:46.082188  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 19:02:46.235335  296772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 19:02:46.237831  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 19:02:46.505570  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 19:02:46.579412  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 19:02:46.735361  296772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 19:02:46.737512  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1009 19:02:46.923283  296772 node_ready.go:57] node "addons-999657" has "Ready":"False" status (will retry)
	I1009 19:02:47.005226  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 19:02:47.079326  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 19:02:47.234519  296772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 19:02:47.236649  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 19:02:47.505631  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 19:02:47.579575  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 19:02:47.736220  296772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 19:02:47.738505  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 19:02:48.004926  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 19:02:48.078973  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 19:02:48.233980  296772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 19:02:48.237218  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 19:02:48.505708  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 19:02:48.579609  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 19:02:48.735355  296772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 19:02:48.737823  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 19:02:49.005322  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 19:02:49.079519  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 19:02:49.234994  296772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 19:02:49.237161  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1009 19:02:49.422272  296772 node_ready.go:57] node "addons-999657" has "Ready":"False" status (will retry)
	I1009 19:02:49.505487  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 19:02:49.579864  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 19:02:49.734405  296772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 19:02:49.736747  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 19:02:50.004578  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 19:02:50.079898  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 19:02:50.234467  296772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 19:02:50.236744  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 19:02:50.505759  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 19:02:50.578600  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 19:02:50.738090  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 19:02:50.739827  296772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 19:02:51.005199  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 19:02:51.079530  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 19:02:51.234843  296772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 19:02:51.237354  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1009 19:02:51.422525  296772 node_ready.go:57] node "addons-999657" has "Ready":"False" status (will retry)
	I1009 19:02:51.504683  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 19:02:51.578955  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 19:02:51.736011  296772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 19:02:51.737597  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 19:02:52.004989  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 19:02:52.078973  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 19:02:52.234658  296772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 19:02:52.236925  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 19:02:52.505411  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 19:02:52.579622  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 19:02:52.740742  296772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 19:02:52.744193  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 19:02:52.886290  296772 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1009 19:02:53.005147  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 19:02:53.079368  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 19:02:53.234860  296772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 19:02:53.237189  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1009 19:02:53.423282  296772 node_ready.go:57] node "addons-999657" has "Ready":"False" status (will retry)
	I1009 19:02:53.506303  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 19:02:53.580448  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1009 19:02:53.710854  296772 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:02:53.710892  296772 retry.go:31] will retry after 10.565917129s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:02:53.735899  296772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 19:02:53.737404  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 19:02:54.004848  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 19:02:54.079275  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 19:02:54.234639  296772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 19:02:54.237285  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 19:02:54.505251  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 19:02:54.579378  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 19:02:54.735859  296772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 19:02:54.737905  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 19:02:55.005035  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 19:02:55.079499  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 19:02:55.235265  296772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 19:02:55.236843  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 19:02:55.505439  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 19:02:55.579413  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 19:02:55.735919  296772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 19:02:55.740967  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1009 19:02:55.923207  296772 node_ready.go:57] node "addons-999657" has "Ready":"False" status (will retry)
	I1009 19:02:56.005018  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 19:02:56.078910  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 19:02:56.234192  296772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 19:02:56.236879  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 19:02:56.505534  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 19:02:56.579341  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 19:02:56.734655  296772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 19:02:56.737211  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 19:02:57.005068  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 19:02:57.079002  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 19:02:57.234751  296772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 19:02:57.237144  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 19:02:57.505277  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 19:02:57.579260  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 19:02:57.735068  296772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 19:02:57.737403  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 19:02:58.064586  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 19:02:58.130212  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 19:02:58.287242  296772 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1009 19:02:58.287267  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 19:02:58.287658  296772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 19:02:58.471744  296772 node_ready.go:49] node "addons-999657" is "Ready"
	I1009 19:02:58.471777  296772 node_ready.go:38] duration metric: took 39.052545505s for node "addons-999657" to be "Ready" ...
	I1009 19:02:58.471791  296772 api_server.go:52] waiting for apiserver process to appear ...
	I1009 19:02:58.471850  296772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 19:02:58.505398  296772 api_server.go:72] duration metric: took 41.632931932s to wait for apiserver process to appear ...
	I1009 19:02:58.505424  296772 api_server.go:88] waiting for apiserver healthz status ...
	I1009 19:02:58.505452  296772 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1009 19:02:58.528264  296772 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1009 19:02:58.528290  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 19:02:58.528780  296772 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1009 19:02:58.538888  296772 api_server.go:141] control plane version: v1.34.1
	I1009 19:02:58.538922  296772 api_server.go:131] duration metric: took 33.489056ms to wait for apiserver health ...
	I1009 19:02:58.538932  296772 system_pods.go:43] waiting for kube-system pods to appear ...
	I1009 19:02:58.556864  296772 system_pods.go:59] 19 kube-system pods found
	I1009 19:02:58.556909  296772 system_pods.go:61] "coredns-66bc5c9577-dm266" [2bf7787a-2738-43b9-8632-2b4157093789] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1009 19:02:58.556917  296772 system_pods.go:61] "csi-hostpath-attacher-0" [d9525068-4ed2-4fb0-a039-6768fd5cb26d] Pending
	I1009 19:02:58.556924  296772 system_pods.go:61] "csi-hostpath-resizer-0" [fe8c3320-7341-44ca-a991-ceaa16169a16] Pending
	I1009 19:02:58.556928  296772 system_pods.go:61] "csi-hostpathplugin-4b7rw" [5d573ce2-a134-413f-bb7e-939b263f86b2] Pending
	I1009 19:02:58.556933  296772 system_pods.go:61] "etcd-addons-999657" [e291d5e1-16ff-4e53-a970-58bf8afdf50c] Running
	I1009 19:02:58.556937  296772 system_pods.go:61] "kindnet-rztm2" [cbd574f9-584b-4118-ac18-abf4a715e249] Running
	I1009 19:02:58.556942  296772 system_pods.go:61] "kube-apiserver-addons-999657" [b8c9a6f4-5ae1-4faa-93f7-41c4f1100242] Running
	I1009 19:02:58.556947  296772 system_pods.go:61] "kube-controller-manager-addons-999657" [2e50195e-1447-4d4f-9ac4-c40cd15dcd11] Running
	I1009 19:02:58.556957  296772 system_pods.go:61] "kube-ingress-dns-minikube" [a0996862-e9b2-4ebd-9384-3018e153d32b] Pending
	I1009 19:02:58.556962  296772 system_pods.go:61] "kube-proxy-jcwfl" [07e1a1bf-5df2-4e42-8302-7d69acb08479] Running
	I1009 19:02:58.556969  296772 system_pods.go:61] "kube-scheduler-addons-999657" [c297d383-d2f4-4f34-82f6-76d68291ccbf] Running
	I1009 19:02:58.556977  296772 system_pods.go:61] "metrics-server-85b7d694d7-qgbgn" [1b9f013c-1ebf-4d60-b677-f20de508376a] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1009 19:02:58.556987  296772 system_pods.go:61] "nvidia-device-plugin-daemonset-4lmwx" [2cd943cc-3d6e-418d-ab07-d6fe025ccc38] Pending
	I1009 19:02:58.556994  296772 system_pods.go:61] "registry-66898fdd98-d8jgl" [d398f51d-a918-4ce0-89c9-47064bd1ae01] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1009 19:02:58.557001  296772 system_pods.go:61] "registry-creds-764b6fb674-gq9vn" [bbaa910d-1ec1-4260-9cf0-961ed5abd1c8] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1009 19:02:58.557009  296772 system_pods.go:61] "registry-proxy-q9p6k" [ae33fd9b-bb98-4d1b-9150-ab438ca12680] Pending
	I1009 19:02:58.557017  296772 system_pods.go:61] "snapshot-controller-7d9fbc56b8-jp7nw" [56d55ae5-a807-484f-868d-0d3f3d1b14f6] Pending
	I1009 19:02:58.557022  296772 system_pods.go:61] "snapshot-controller-7d9fbc56b8-txqvb" [89dc3810-4bbb-4414-91c4-558f2d3651fd] Pending
	I1009 19:02:58.557038  296772 system_pods.go:61] "storage-provisioner" [eb0bf8bf-b888-410e-86f0-da0dec609732] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1009 19:02:58.557045  296772 system_pods.go:74] duration metric: took 18.105943ms to wait for pod list to return data ...
	I1009 19:02:58.557056  296772 default_sa.go:34] waiting for default service account to be created ...
	I1009 19:02:58.567798  296772 default_sa.go:45] found service account: "default"
	I1009 19:02:58.567826  296772 default_sa.go:55] duration metric: took 10.761935ms for default service account to be created ...
	I1009 19:02:58.567837  296772 system_pods.go:116] waiting for k8s-apps to be running ...
	I1009 19:02:58.604373  296772 system_pods.go:86] 19 kube-system pods found
	I1009 19:02:58.604413  296772 system_pods.go:89] "coredns-66bc5c9577-dm266" [2bf7787a-2738-43b9-8632-2b4157093789] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1009 19:02:58.604423  296772 system_pods.go:89] "csi-hostpath-attacher-0" [d9525068-4ed2-4fb0-a039-6768fd5cb26d] Pending
	I1009 19:02:58.604428  296772 system_pods.go:89] "csi-hostpath-resizer-0" [fe8c3320-7341-44ca-a991-ceaa16169a16] Pending
	I1009 19:02:58.604433  296772 system_pods.go:89] "csi-hostpathplugin-4b7rw" [5d573ce2-a134-413f-bb7e-939b263f86b2] Pending
	I1009 19:02:58.604437  296772 system_pods.go:89] "etcd-addons-999657" [e291d5e1-16ff-4e53-a970-58bf8afdf50c] Running
	I1009 19:02:58.604442  296772 system_pods.go:89] "kindnet-rztm2" [cbd574f9-584b-4118-ac18-abf4a715e249] Running
	I1009 19:02:58.604446  296772 system_pods.go:89] "kube-apiserver-addons-999657" [b8c9a6f4-5ae1-4faa-93f7-41c4f1100242] Running
	I1009 19:02:58.604451  296772 system_pods.go:89] "kube-controller-manager-addons-999657" [2e50195e-1447-4d4f-9ac4-c40cd15dcd11] Running
	I1009 19:02:58.604459  296772 system_pods.go:89] "kube-ingress-dns-minikube" [a0996862-e9b2-4ebd-9384-3018e153d32b] Pending
	I1009 19:02:58.604463  296772 system_pods.go:89] "kube-proxy-jcwfl" [07e1a1bf-5df2-4e42-8302-7d69acb08479] Running
	I1009 19:02:58.604474  296772 system_pods.go:89] "kube-scheduler-addons-999657" [c297d383-d2f4-4f34-82f6-76d68291ccbf] Running
	I1009 19:02:58.604480  296772 system_pods.go:89] "metrics-server-85b7d694d7-qgbgn" [1b9f013c-1ebf-4d60-b677-f20de508376a] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1009 19:02:58.604491  296772 system_pods.go:89] "nvidia-device-plugin-daemonset-4lmwx" [2cd943cc-3d6e-418d-ab07-d6fe025ccc38] Pending
	I1009 19:02:58.604500  296772 system_pods.go:89] "registry-66898fdd98-d8jgl" [d398f51d-a918-4ce0-89c9-47064bd1ae01] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1009 19:02:58.604513  296772 system_pods.go:89] "registry-creds-764b6fb674-gq9vn" [bbaa910d-1ec1-4260-9cf0-961ed5abd1c8] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1009 19:02:58.604518  296772 system_pods.go:89] "registry-proxy-q9p6k" [ae33fd9b-bb98-4d1b-9150-ab438ca12680] Pending
	I1009 19:02:58.604522  296772 system_pods.go:89] "snapshot-controller-7d9fbc56b8-jp7nw" [56d55ae5-a807-484f-868d-0d3f3d1b14f6] Pending
	I1009 19:02:58.604526  296772 system_pods.go:89] "snapshot-controller-7d9fbc56b8-txqvb" [89dc3810-4bbb-4414-91c4-558f2d3651fd] Pending
	I1009 19:02:58.604540  296772 system_pods.go:89] "storage-provisioner" [eb0bf8bf-b888-410e-86f0-da0dec609732] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1009 19:02:58.604556  296772 retry.go:31] will retry after 311.792981ms: missing components: kube-dns
	I1009 19:02:58.605295  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 19:02:58.767371  296772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 19:02:58.769163  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 19:02:58.933628  296772 system_pods.go:86] 19 kube-system pods found
	I1009 19:02:58.933669  296772 system_pods.go:89] "coredns-66bc5c9577-dm266" [2bf7787a-2738-43b9-8632-2b4157093789] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1009 19:02:58.933676  296772 system_pods.go:89] "csi-hostpath-attacher-0" [d9525068-4ed2-4fb0-a039-6768fd5cb26d] Pending
	I1009 19:02:58.933684  296772 system_pods.go:89] "csi-hostpath-resizer-0" [fe8c3320-7341-44ca-a991-ceaa16169a16] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1009 19:02:58.933688  296772 system_pods.go:89] "csi-hostpathplugin-4b7rw" [5d573ce2-a134-413f-bb7e-939b263f86b2] Pending
	I1009 19:02:58.933693  296772 system_pods.go:89] "etcd-addons-999657" [e291d5e1-16ff-4e53-a970-58bf8afdf50c] Running
	I1009 19:02:58.933698  296772 system_pods.go:89] "kindnet-rztm2" [cbd574f9-584b-4118-ac18-abf4a715e249] Running
	I1009 19:02:58.933703  296772 system_pods.go:89] "kube-apiserver-addons-999657" [b8c9a6f4-5ae1-4faa-93f7-41c4f1100242] Running
	I1009 19:02:58.933707  296772 system_pods.go:89] "kube-controller-manager-addons-999657" [2e50195e-1447-4d4f-9ac4-c40cd15dcd11] Running
	I1009 19:02:58.933719  296772 system_pods.go:89] "kube-ingress-dns-minikube" [a0996862-e9b2-4ebd-9384-3018e153d32b] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1009 19:02:58.933724  296772 system_pods.go:89] "kube-proxy-jcwfl" [07e1a1bf-5df2-4e42-8302-7d69acb08479] Running
	I1009 19:02:58.933736  296772 system_pods.go:89] "kube-scheduler-addons-999657" [c297d383-d2f4-4f34-82f6-76d68291ccbf] Running
	I1009 19:02:58.933743  296772 system_pods.go:89] "metrics-server-85b7d694d7-qgbgn" [1b9f013c-1ebf-4d60-b677-f20de508376a] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1009 19:02:58.933757  296772 system_pods.go:89] "nvidia-device-plugin-daemonset-4lmwx" [2cd943cc-3d6e-418d-ab07-d6fe025ccc38] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1009 19:02:58.933764  296772 system_pods.go:89] "registry-66898fdd98-d8jgl" [d398f51d-a918-4ce0-89c9-47064bd1ae01] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1009 19:02:58.933776  296772 system_pods.go:89] "registry-creds-764b6fb674-gq9vn" [bbaa910d-1ec1-4260-9cf0-961ed5abd1c8] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1009 19:02:58.933781  296772 system_pods.go:89] "registry-proxy-q9p6k" [ae33fd9b-bb98-4d1b-9150-ab438ca12680] Pending
	I1009 19:02:58.933787  296772 system_pods.go:89] "snapshot-controller-7d9fbc56b8-jp7nw" [56d55ae5-a807-484f-868d-0d3f3d1b14f6] Pending
	I1009 19:02:58.933803  296772 system_pods.go:89] "snapshot-controller-7d9fbc56b8-txqvb" [89dc3810-4bbb-4414-91c4-558f2d3651fd] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1009 19:02:58.933810  296772 system_pods.go:89] "storage-provisioner" [eb0bf8bf-b888-410e-86f0-da0dec609732] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1009 19:02:58.933829  296772 retry.go:31] will retry after 235.971577ms: missing components: kube-dns
	I1009 19:02:59.010521  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 19:02:59.110398  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 19:02:59.213154  296772 system_pods.go:86] 19 kube-system pods found
	I1009 19:02:59.213193  296772 system_pods.go:89] "coredns-66bc5c9577-dm266" [2bf7787a-2738-43b9-8632-2b4157093789] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1009 19:02:59.213207  296772 system_pods.go:89] "csi-hostpath-attacher-0" [d9525068-4ed2-4fb0-a039-6768fd5cb26d] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1009 19:02:59.213217  296772 system_pods.go:89] "csi-hostpath-resizer-0" [fe8c3320-7341-44ca-a991-ceaa16169a16] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1009 19:02:59.213226  296772 system_pods.go:89] "csi-hostpathplugin-4b7rw" [5d573ce2-a134-413f-bb7e-939b263f86b2] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1009 19:02:59.213235  296772 system_pods.go:89] "etcd-addons-999657" [e291d5e1-16ff-4e53-a970-58bf8afdf50c] Running
	I1009 19:02:59.213244  296772 system_pods.go:89] "kindnet-rztm2" [cbd574f9-584b-4118-ac18-abf4a715e249] Running
	I1009 19:02:59.213249  296772 system_pods.go:89] "kube-apiserver-addons-999657" [b8c9a6f4-5ae1-4faa-93f7-41c4f1100242] Running
	I1009 19:02:59.213260  296772 system_pods.go:89] "kube-controller-manager-addons-999657" [2e50195e-1447-4d4f-9ac4-c40cd15dcd11] Running
	I1009 19:02:59.213267  296772 system_pods.go:89] "kube-ingress-dns-minikube" [a0996862-e9b2-4ebd-9384-3018e153d32b] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1009 19:02:59.213277  296772 system_pods.go:89] "kube-proxy-jcwfl" [07e1a1bf-5df2-4e42-8302-7d69acb08479] Running
	I1009 19:02:59.213282  296772 system_pods.go:89] "kube-scheduler-addons-999657" [c297d383-d2f4-4f34-82f6-76d68291ccbf] Running
	I1009 19:02:59.213290  296772 system_pods.go:89] "metrics-server-85b7d694d7-qgbgn" [1b9f013c-1ebf-4d60-b677-f20de508376a] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1009 19:02:59.213301  296772 system_pods.go:89] "nvidia-device-plugin-daemonset-4lmwx" [2cd943cc-3d6e-418d-ab07-d6fe025ccc38] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1009 19:02:59.213308  296772 system_pods.go:89] "registry-66898fdd98-d8jgl" [d398f51d-a918-4ce0-89c9-47064bd1ae01] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1009 19:02:59.213317  296772 system_pods.go:89] "registry-creds-764b6fb674-gq9vn" [bbaa910d-1ec1-4260-9cf0-961ed5abd1c8] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1009 19:02:59.213323  296772 system_pods.go:89] "registry-proxy-q9p6k" [ae33fd9b-bb98-4d1b-9150-ab438ca12680] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1009 19:02:59.213329  296772 system_pods.go:89] "snapshot-controller-7d9fbc56b8-jp7nw" [56d55ae5-a807-484f-868d-0d3f3d1b14f6] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1009 19:02:59.213338  296772 system_pods.go:89] "snapshot-controller-7d9fbc56b8-txqvb" [89dc3810-4bbb-4414-91c4-558f2d3651fd] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1009 19:02:59.213348  296772 system_pods.go:89] "storage-provisioner" [eb0bf8bf-b888-410e-86f0-da0dec609732] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1009 19:02:59.213364  296772 retry.go:31] will retry after 342.914299ms: missing components: kube-dns
	I1009 19:02:59.312357  296772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 19:02:59.312539  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 19:02:59.504871  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 19:02:59.561278  296772 system_pods.go:86] 19 kube-system pods found
	I1009 19:02:59.561309  296772 system_pods.go:89] "coredns-66bc5c9577-dm266" [2bf7787a-2738-43b9-8632-2b4157093789] Running
	I1009 19:02:59.561319  296772 system_pods.go:89] "csi-hostpath-attacher-0" [d9525068-4ed2-4fb0-a039-6768fd5cb26d] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1009 19:02:59.561327  296772 system_pods.go:89] "csi-hostpath-resizer-0" [fe8c3320-7341-44ca-a991-ceaa16169a16] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1009 19:02:59.561335  296772 system_pods.go:89] "csi-hostpathplugin-4b7rw" [5d573ce2-a134-413f-bb7e-939b263f86b2] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1009 19:02:59.561340  296772 system_pods.go:89] "etcd-addons-999657" [e291d5e1-16ff-4e53-a970-58bf8afdf50c] Running
	I1009 19:02:59.561344  296772 system_pods.go:89] "kindnet-rztm2" [cbd574f9-584b-4118-ac18-abf4a715e249] Running
	I1009 19:02:59.561353  296772 system_pods.go:89] "kube-apiserver-addons-999657" [b8c9a6f4-5ae1-4faa-93f7-41c4f1100242] Running
	I1009 19:02:59.561359  296772 system_pods.go:89] "kube-controller-manager-addons-999657" [2e50195e-1447-4d4f-9ac4-c40cd15dcd11] Running
	I1009 19:02:59.561366  296772 system_pods.go:89] "kube-ingress-dns-minikube" [a0996862-e9b2-4ebd-9384-3018e153d32b] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1009 19:02:59.561375  296772 system_pods.go:89] "kube-proxy-jcwfl" [07e1a1bf-5df2-4e42-8302-7d69acb08479] Running
	I1009 19:02:59.561380  296772 system_pods.go:89] "kube-scheduler-addons-999657" [c297d383-d2f4-4f34-82f6-76d68291ccbf] Running
	I1009 19:02:59.561388  296772 system_pods.go:89] "metrics-server-85b7d694d7-qgbgn" [1b9f013c-1ebf-4d60-b677-f20de508376a] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1009 19:02:59.561399  296772 system_pods.go:89] "nvidia-device-plugin-daemonset-4lmwx" [2cd943cc-3d6e-418d-ab07-d6fe025ccc38] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1009 19:02:59.561405  296772 system_pods.go:89] "registry-66898fdd98-d8jgl" [d398f51d-a918-4ce0-89c9-47064bd1ae01] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1009 19:02:59.561416  296772 system_pods.go:89] "registry-creds-764b6fb674-gq9vn" [bbaa910d-1ec1-4260-9cf0-961ed5abd1c8] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1009 19:02:59.561421  296772 system_pods.go:89] "registry-proxy-q9p6k" [ae33fd9b-bb98-4d1b-9150-ab438ca12680] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1009 19:02:59.561434  296772 system_pods.go:89] "snapshot-controller-7d9fbc56b8-jp7nw" [56d55ae5-a807-484f-868d-0d3f3d1b14f6] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1009 19:02:59.561452  296772 system_pods.go:89] "snapshot-controller-7d9fbc56b8-txqvb" [89dc3810-4bbb-4414-91c4-558f2d3651fd] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1009 19:02:59.561462  296772 system_pods.go:89] "storage-provisioner" [eb0bf8bf-b888-410e-86f0-da0dec609732] Running
	I1009 19:02:59.561472  296772 system_pods.go:126] duration metric: took 993.628633ms to wait for k8s-apps to be running ...
	I1009 19:02:59.561485  296772 system_svc.go:44] waiting for kubelet service to be running ....
	I1009 19:02:59.561543  296772 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1009 19:02:59.576311  296772 system_svc.go:56] duration metric: took 14.817504ms WaitForService to wait for kubelet
	I1009 19:02:59.576341  296772 kubeadm.go:586] duration metric: took 42.703881644s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1009 19:02:59.576360  296772 node_conditions.go:102] verifying NodePressure condition ...
	I1009 19:02:59.580568  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 19:02:59.581245  296772 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1009 19:02:59.581274  296772 node_conditions.go:123] node cpu capacity is 2
	I1009 19:02:59.581287  296772 node_conditions.go:105] duration metric: took 4.921468ms to run NodePressure ...
	I1009 19:02:59.581300  296772 start.go:242] waiting for startup goroutines ...
	I1009 19:02:59.734413  296772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 19:02:59.743034  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 19:03:00.005762  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 19:03:00.081352  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 19:03:00.247940  296772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 19:03:00.249361  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 19:03:00.507381  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 19:03:00.580189  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 19:03:00.741689  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 19:03:00.742206  296772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 19:03:01.006194  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 19:03:01.106710  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 19:03:01.234068  296772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 19:03:01.236612  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 19:03:01.506638  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 19:03:01.580440  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 19:03:01.735404  296772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 19:03:01.743414  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 19:03:02.007084  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 19:03:02.079737  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 19:03:02.234326  296772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 19:03:02.236624  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 19:03:02.505175  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 19:03:02.579344  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 19:03:02.734862  296772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 19:03:02.742557  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 19:03:03.005169  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 19:03:03.079494  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 19:03:03.235072  296772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 19:03:03.237516  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 19:03:03.505586  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 19:03:03.579945  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 19:03:03.742914  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 19:03:03.744178  296772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 19:03:04.006601  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 19:03:04.079951  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 19:03:04.234012  296772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 19:03:04.236432  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 19:03:04.277716  296772 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1009 19:03:04.505344  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 19:03:04.578982  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 19:03:04.740078  296772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 19:03:04.742147  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 19:03:05.006015  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 19:03:05.079853  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 19:03:05.235047  296772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 19:03:05.237342  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 19:03:05.317517  296772 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.039761002s)
	W1009 19:03:05.317559  296772 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:03:05.317578  296772 retry.go:31] will retry after 13.628467829s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:03:05.511442  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 19:03:05.578994  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 19:03:05.736983  296772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 19:03:05.739046  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 19:03:06.006194  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 19:03:06.079617  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 19:03:06.237174  296772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 19:03:06.239134  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 19:03:06.505573  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 19:03:06.579832  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 19:03:06.734373  296772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 19:03:06.737041  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 19:03:07.006276  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 19:03:07.079603  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 19:03:07.236406  296772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 19:03:07.238629  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 19:03:07.505132  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 19:03:07.606004  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 19:03:07.740394  296772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 19:03:07.740838  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 19:03:08.006310  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 19:03:08.080398  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 19:03:08.236102  296772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 19:03:08.238526  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 19:03:08.506578  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 19:03:08.586785  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 19:03:08.736376  296772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 19:03:08.738138  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 19:03:09.008241  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 19:03:09.079967  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 19:03:09.235559  296772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 19:03:09.238639  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 19:03:09.506075  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 19:03:09.583900  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 19:03:09.738282  296772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 19:03:09.739900  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 19:03:10.005974  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 19:03:10.105947  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 19:03:10.234219  296772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 19:03:10.236144  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 19:03:10.505614  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 19:03:10.584592  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 19:03:10.741447  296772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 19:03:10.743204  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 19:03:11.005173  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 19:03:11.079438  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 19:03:11.234650  296772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 19:03:11.237185  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 19:03:11.506821  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 19:03:11.579716  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 19:03:11.734861  296772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 19:03:11.743685  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 19:03:12.005843  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 19:03:12.079285  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 19:03:12.237945  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 19:03:12.238756  296772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 19:03:12.505965  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 19:03:12.579304  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 19:03:12.737151  296772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 19:03:12.742511  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 19:03:13.005439  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 19:03:13.080595  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 19:03:13.235238  296772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 19:03:13.238224  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 19:03:13.506572  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 19:03:13.607121  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 19:03:13.739614  296772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 19:03:13.741932  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 19:03:14.006631  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 19:03:14.106476  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 19:03:14.234893  296772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 19:03:14.237247  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 19:03:14.506406  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 19:03:14.579881  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 19:03:14.738640  296772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 19:03:14.743240  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 19:03:15.005796  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 19:03:15.104889  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 19:03:15.235715  296772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 19:03:15.236980  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 19:03:15.506428  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 19:03:15.579373  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 19:03:15.739684  296772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 19:03:15.740171  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 19:03:16.006141  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 19:03:16.079360  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 19:03:16.237127  296772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 19:03:16.239115  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 19:03:16.505964  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 19:03:16.579293  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 19:03:16.734963  296772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 19:03:16.737073  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 19:03:17.005889  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 19:03:17.079043  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 19:03:17.236774  296772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 19:03:17.239577  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 19:03:17.505240  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 19:03:17.579051  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 19:03:17.735166  296772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 19:03:17.737424  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 19:03:18.004841  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 19:03:18.079249  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 19:03:18.235614  296772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 19:03:18.237376  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 19:03:18.504955  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 19:03:18.579280  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 19:03:18.734738  296772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 19:03:18.737025  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 19:03:18.946306  296772 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1009 19:03:19.005982  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 19:03:19.078969  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 19:03:19.235000  296772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 19:03:19.241401  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 19:03:19.505535  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 19:03:19.579909  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 19:03:19.734674  296772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 19:03:19.737567  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1009 19:03:19.880643  296772 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:03:19.880690  296772 retry.go:31] will retry after 31.146680689s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:03:20.005516  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 19:03:20.079859  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 19:03:20.235979  296772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 19:03:20.237713  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 19:03:20.505989  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 19:03:20.579685  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 19:03:20.734974  296772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 19:03:20.738703  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 19:03:21.006037  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 19:03:21.080176  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 19:03:21.235112  296772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 19:03:21.238406  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 19:03:21.505847  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 19:03:21.579361  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 19:03:21.735215  296772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 19:03:21.737717  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 19:03:22.005308  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 19:03:22.079772  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 19:03:22.234084  296772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 19:03:22.236625  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 19:03:22.514909  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 19:03:22.580012  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 19:03:22.734681  296772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 19:03:22.737544  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 19:03:23.005272  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 19:03:23.079660  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 19:03:23.235512  296772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 19:03:23.238686  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 19:03:23.506184  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 19:03:23.579713  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 19:03:23.734784  296772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 19:03:23.738576  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 19:03:24.006024  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 19:03:24.079334  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 19:03:24.235358  296772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 19:03:24.238732  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 19:03:24.505606  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 19:03:24.579784  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 19:03:24.734977  296772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 19:03:24.742510  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 19:03:25.005423  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 19:03:25.080071  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 19:03:25.235487  296772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 19:03:25.238297  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 19:03:25.506705  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 19:03:25.578959  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 19:03:25.739787  296772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 19:03:25.742261  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 19:03:26.005544  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 19:03:26.080158  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 19:03:26.235371  296772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 19:03:26.238256  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 19:03:26.506806  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 19:03:26.580762  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 19:03:26.733920  296772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 19:03:26.739206  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 19:03:27.006339  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 19:03:27.079592  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 19:03:27.234644  296772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 19:03:27.236885  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 19:03:27.506147  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 19:03:27.578947  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 19:03:27.734675  296772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 19:03:27.743827  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 19:03:28.006460  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 19:03:28.080404  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 19:03:28.234392  296772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 19:03:28.236253  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 19:03:28.505284  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 19:03:28.579708  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 19:03:28.739298  296772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 19:03:28.746115  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 19:03:29.005650  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 19:03:29.079417  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 19:03:29.234472  296772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 19:03:29.237447  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 19:03:29.505673  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 19:03:29.578862  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 19:03:29.733935  296772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 19:03:29.736268  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 19:03:30.008336  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 19:03:30.083171  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 19:03:30.234830  296772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 19:03:30.238228  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 19:03:30.505761  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 19:03:30.578925  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 19:03:30.739308  296772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 19:03:30.744894  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 19:03:31.005614  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 19:03:31.106489  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 19:03:31.236057  296772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 19:03:31.237501  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 19:03:31.506350  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 19:03:31.606267  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 19:03:31.738312  296772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 19:03:31.738575  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 19:03:32.004817  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 19:03:32.079673  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 19:03:32.234737  296772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 19:03:32.238044  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 19:03:32.505902  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 19:03:32.579195  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 19:03:32.752144  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 19:03:32.753176  296772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 19:03:33.005515  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 19:03:33.105890  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 19:03:33.234209  296772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 19:03:33.238034  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 19:03:33.505926  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 19:03:33.578990  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 19:03:33.742506  296772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 19:03:33.754177  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 19:03:34.005670  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 19:03:34.080791  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 19:03:34.234403  296772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 19:03:34.237072  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 19:03:34.506450  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 19:03:34.579421  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 19:03:34.739358  296772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 19:03:34.745294  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 19:03:35.005699  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 19:03:35.078990  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 19:03:35.234384  296772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 19:03:35.237245  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 19:03:35.506696  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 19:03:35.579029  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 19:03:35.738340  296772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 19:03:35.740689  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 19:03:36.005277  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 19:03:36.079399  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 19:03:36.239655  296772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 19:03:36.240750  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 19:03:36.508777  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 19:03:36.608284  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 19:03:36.749786  296772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 19:03:36.755504  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 19:03:37.006031  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 19:03:37.079270  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 19:03:37.234913  296772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 19:03:37.237692  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 19:03:37.506051  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 19:03:37.579784  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 19:03:37.737605  296772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 19:03:37.742761  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 19:03:38.005627  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 19:03:38.106250  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 19:03:38.234475  296772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 19:03:38.236811  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 19:03:38.506264  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 19:03:38.579432  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 19:03:38.737886  296772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 19:03:38.738038  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 19:03:39.006094  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 19:03:39.079808  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 19:03:39.235279  296772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 19:03:39.237235  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 19:03:39.506061  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 19:03:39.579349  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 19:03:39.736168  296772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 19:03:39.737912  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 19:03:40.005464  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 19:03:40.080333  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 19:03:40.234461  296772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 19:03:40.236667  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 19:03:40.505547  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 19:03:40.579353  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 19:03:40.735133  296772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 19:03:40.738637  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 19:03:41.006306  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 19:03:41.106525  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 19:03:41.237033  296772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 19:03:41.245806  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 19:03:41.506716  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 19:03:41.606486  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 19:03:41.743537  296772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 19:03:41.744779  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 19:03:42.005453  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 19:03:42.079554  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 19:03:42.235268  296772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 19:03:42.238540  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 19:03:42.505989  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 19:03:42.579025  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 19:03:42.740468  296772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 19:03:42.745565  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 19:03:43.005820  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 19:03:43.079270  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 19:03:43.235072  296772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 19:03:43.237696  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 19:03:43.506633  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 19:03:43.606338  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 19:03:43.789457  296772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 19:03:43.791397  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 19:03:44.005905  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 19:03:44.079784  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 19:03:44.235521  296772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 19:03:44.242091  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 19:03:44.506355  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 19:03:44.580164  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 19:03:44.741996  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 19:03:44.742396  296772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 19:03:45.012391  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 19:03:45.108851  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 19:03:45.238680  296772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 19:03:45.239038  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 19:03:45.507358  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 19:03:45.579677  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 19:03:45.735928  296772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 19:03:45.758188  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 19:03:46.019599  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 19:03:46.080125  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 19:03:46.237321  296772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 19:03:46.239837  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 19:03:46.505708  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 19:03:46.578943  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 19:03:46.746444  296772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 19:03:46.747889  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 19:03:47.006628  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 19:03:47.078702  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 19:03:47.235828  296772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 19:03:47.237615  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 19:03:47.505764  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 19:03:47.579081  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 19:03:47.741735  296772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 19:03:47.743491  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 19:03:48.006411  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 19:03:48.079486  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 19:03:48.235311  296772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 19:03:48.239083  296772 kapi.go:107] duration metric: took 1m25.005255973s to wait for kubernetes.io/minikube-addons=registry ...
	I1009 19:03:48.506048  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 19:03:48.579358  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 19:03:48.740175  296772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 19:03:49.006246  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 19:03:49.079099  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 19:03:49.234332  296772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 19:03:49.505479  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 19:03:49.580023  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 19:03:49.738286  296772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 19:03:50.005201  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 19:03:50.079709  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 19:03:50.234277  296772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 19:03:50.505506  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 19:03:50.580343  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 19:03:50.734850  296772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 19:03:51.005036  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 19:03:51.028335  296772 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1009 19:03:51.086442  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 19:03:51.235543  296772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 19:03:51.504875  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 19:03:51.579596  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 19:03:51.735656  296772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 19:03:52.007281  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 19:03:52.080164  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 19:03:52.234058  296772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 19:03:52.280483  296772 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.252110986s)
	W1009 19:03:52.280577  296772 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 19:03:52.280707  296772 out.go:285] ! Enabling 'inspektor-gadget' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1009 19:03:52.510999  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 19:03:52.591940  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 19:03:52.734088  296772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 19:03:53.005849  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 19:03:53.079075  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 19:03:53.235439  296772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 19:03:53.505585  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 19:03:53.579912  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 19:03:53.746385  296772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 19:03:54.007804  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 19:03:54.079003  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 19:03:54.234445  296772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 19:03:54.506236  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 19:03:54.579839  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 19:03:54.739357  296772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 19:03:55.005722  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 19:03:55.106214  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 19:03:55.234362  296772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 19:03:55.507201  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 19:03:55.579552  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 19:03:55.740233  296772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 19:03:56.008124  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 19:03:56.107903  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 19:03:56.237458  296772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 19:03:56.508961  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 19:03:56.579294  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 19:03:56.740532  296772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 19:03:57.007473  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 19:03:57.080747  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 19:03:57.236388  296772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 19:03:57.506626  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 19:03:57.579733  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 19:03:57.805557  296772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 19:03:58.026032  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 19:03:58.079500  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 19:03:58.234538  296772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 19:03:58.505198  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 19:03:58.579028  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 19:03:58.734793  296772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 19:03:59.008149  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 19:03:59.082852  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 19:03:59.233789  296772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 19:03:59.505458  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 19:03:59.579622  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 19:03:59.738953  296772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 19:04:00.012121  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 19:04:00.095005  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 19:04:00.264125  296772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 19:04:00.511225  296772 kapi.go:107] duration metric: took 1m37.009672788s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1009 19:04:00.580046  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 19:04:00.740700  296772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 19:04:01.079032  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 19:04:01.234952  296772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 19:04:01.578840  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 19:04:01.740839  296772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 19:04:02.079361  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 19:04:02.244320  296772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 19:04:02.579634  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 19:04:02.738250  296772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 19:04:03.079595  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 19:04:03.234023  296772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 19:04:03.579323  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 19:04:03.735634  296772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 19:04:04.080766  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 19:04:04.235367  296772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 19:04:04.579468  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 19:04:04.740269  296772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 19:04:05.080517  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 19:04:05.235394  296772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 19:04:05.578838  296772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 19:04:05.738322  296772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 19:04:06.103209  296772 kapi.go:107] duration metric: took 1m39.527490314s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1009 19:04:06.107256  296772 out.go:179] * Your GCP credentials will now be mounted into every pod created in the addons-999657 cluster.
	I1009 19:04:06.110256  296772 out.go:179] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1009 19:04:06.113388  296772 out.go:179] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I1009 19:04:06.235259  296772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 19:04:06.738130  296772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 19:04:07.234522  296772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 19:04:07.738577  296772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 19:04:08.234591  296772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 19:04:08.739771  296772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 19:04:09.234522  296772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 19:04:09.748860  296772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 19:04:10.235260  296772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 19:04:10.735027  296772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 19:04:11.235284  296772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 19:04:11.745271  296772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 19:04:12.234114  296772 kapi.go:107] duration metric: took 1m49.003349416s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1009 19:04:12.237486  296772 out.go:179] * Enabled addons: registry-creds, storage-provisioner, amd-gpu-device-plugin, ingress-dns, nvidia-device-plugin, cloud-spanner, metrics-server, yakd, default-storageclass, volumesnapshots, registry, csi-hostpath-driver, gcp-auth, ingress
	I1009 19:04:12.240510  296772 addons.go:514] duration metric: took 1m55.367572673s for enable addons: enabled=[registry-creds storage-provisioner amd-gpu-device-plugin ingress-dns nvidia-device-plugin cloud-spanner metrics-server yakd default-storageclass volumesnapshots registry csi-hostpath-driver gcp-auth ingress]
	I1009 19:04:12.240575  296772 start.go:247] waiting for cluster config update ...
	I1009 19:04:12.240599  296772 start.go:256] writing updated cluster config ...
	I1009 19:04:12.240911  296772 ssh_runner.go:195] Run: rm -f paused
	I1009 19:04:12.245184  296772 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1009 19:04:12.334482  296772 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-dm266" in "kube-system" namespace to be "Ready" or be gone ...
	I1009 19:04:12.341009  296772 pod_ready.go:94] pod "coredns-66bc5c9577-dm266" is "Ready"
	I1009 19:04:12.341038  296772 pod_ready.go:86] duration metric: took 6.525573ms for pod "coredns-66bc5c9577-dm266" in "kube-system" namespace to be "Ready" or be gone ...
	I1009 19:04:12.343630  296772 pod_ready.go:83] waiting for pod "etcd-addons-999657" in "kube-system" namespace to be "Ready" or be gone ...
	I1009 19:04:12.348471  296772 pod_ready.go:94] pod "etcd-addons-999657" is "Ready"
	I1009 19:04:12.348502  296772 pod_ready.go:86] duration metric: took 4.798744ms for pod "etcd-addons-999657" in "kube-system" namespace to be "Ready" or be gone ...
	I1009 19:04:12.350946  296772 pod_ready.go:83] waiting for pod "kube-apiserver-addons-999657" in "kube-system" namespace to be "Ready" or be gone ...
	I1009 19:04:12.355462  296772 pod_ready.go:94] pod "kube-apiserver-addons-999657" is "Ready"
	I1009 19:04:12.355526  296772 pod_ready.go:86] duration metric: took 4.555221ms for pod "kube-apiserver-addons-999657" in "kube-system" namespace to be "Ready" or be gone ...
	I1009 19:04:12.357777  296772 pod_ready.go:83] waiting for pod "kube-controller-manager-addons-999657" in "kube-system" namespace to be "Ready" or be gone ...
	I1009 19:04:12.649136  296772 pod_ready.go:94] pod "kube-controller-manager-addons-999657" is "Ready"
	I1009 19:04:12.649213  296772 pod_ready.go:86] duration metric: took 291.409172ms for pod "kube-controller-manager-addons-999657" in "kube-system" namespace to be "Ready" or be gone ...
	I1009 19:04:12.849681  296772 pod_ready.go:83] waiting for pod "kube-proxy-jcwfl" in "kube-system" namespace to be "Ready" or be gone ...
	I1009 19:04:13.248885  296772 pod_ready.go:94] pod "kube-proxy-jcwfl" is "Ready"
	I1009 19:04:13.248912  296772 pod_ready.go:86] duration metric: took 399.20345ms for pod "kube-proxy-jcwfl" in "kube-system" namespace to be "Ready" or be gone ...
	I1009 19:04:13.449222  296772 pod_ready.go:83] waiting for pod "kube-scheduler-addons-999657" in "kube-system" namespace to be "Ready" or be gone ...
	I1009 19:04:13.849961  296772 pod_ready.go:94] pod "kube-scheduler-addons-999657" is "Ready"
	I1009 19:04:13.849993  296772 pod_ready.go:86] duration metric: took 400.741013ms for pod "kube-scheduler-addons-999657" in "kube-system" namespace to be "Ready" or be gone ...
	I1009 19:04:13.850007  296772 pod_ready.go:40] duration metric: took 1.604793616s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1009 19:04:13.910797  296772 start.go:628] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1009 19:04:13.913990  296772 out.go:179] * Done! kubectl is now configured to use "addons-999657" cluster and "default" namespace by default
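A note on the wait loops above: the kapi.go and pod_ready.go lines show the same polling pattern throughout — list the pods matching a label selector, log the current phase, sleep briefly, and repeat until every matching pod reports the Ready condition or the per-selector timeout expires. The following is only a minimal client-go sketch of that pattern, not minikube's own implementation; the kubeconfig path, the 4-minute deadline, and the 500ms poll interval are assumptions, while the label selectors are the ones quoted in the log.

    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // isReady reports whether a pod carries the Ready=True condition.
    func isReady(p corev1.Pod) bool {
        for _, c := range p.Status.Conditions {
            if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
                return true
            }
        }
        return false
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        selectors := []string{
            "k8s-app=kube-dns", "component=etcd", "component=kube-apiserver",
            "component=kube-controller-manager", "k8s-app=kube-proxy", "component=kube-scheduler",
        }
        deadline := time.Now().Add(4 * time.Minute) // assumed overall budget
        for _, sel := range selectors {
            for {
                pods, err := cs.CoreV1().Pods("kube-system").List(context.TODO(),
                    metav1.ListOptions{LabelSelector: sel})
                if err != nil {
                    panic(err)
                }
                allReady := len(pods.Items) > 0
                for _, p := range pods.Items {
                    if !isReady(p) {
                        allReady = false
                    }
                }
                if allReady {
                    fmt.Printf("pods matching %q are Ready\n", sel)
                    break
                }
                if time.Now().After(deadline) {
                    panic("timed out waiting for " + sel)
                }
                time.Sleep(500 * time.Millisecond) // assumed poll interval
            }
        }
    }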
	
	
	==> CRI-O <==
	Oct 09 19:04:12 addons-999657 crio[832]: time="2025-10-09T19:04:12.163485362Z" level=info msg="Stopped pod sandbox (already stopped): 752a4525eaf1bc068807ad12e5f24ab43f49763d8715db7accacef73875d5639" id=0c3bef52-a840-4a83-b32a-1f2bba07fe4a name=/runtime.v1.RuntimeService/StopPodSandbox
	Oct 09 19:04:12 addons-999657 crio[832]: time="2025-10-09T19:04:12.166222887Z" level=info msg="Removing pod sandbox: 752a4525eaf1bc068807ad12e5f24ab43f49763d8715db7accacef73875d5639" id=9ab5f633-b465-4678-a9a6-1a251f6d1e1c name=/runtime.v1.RuntimeService/RemovePodSandbox
	Oct 09 19:04:12 addons-999657 crio[832]: time="2025-10-09T19:04:12.172873508Z" level=info msg="Removed pod sandbox: 752a4525eaf1bc068807ad12e5f24ab43f49763d8715db7accacef73875d5639" id=9ab5f633-b465-4678-a9a6-1a251f6d1e1c name=/runtime.v1.RuntimeService/RemovePodSandbox
	Oct 09 19:04:14 addons-999657 crio[832]: time="2025-10-09T19:04:14.938525939Z" level=info msg="Running pod sandbox: default/busybox/POD" id=a11f5ddd-b3d2-4066-b5fb-1b6081124cba name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 09 19:04:14 addons-999657 crio[832]: time="2025-10-09T19:04:14.938588122Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 09 19:04:14 addons-999657 crio[832]: time="2025-10-09T19:04:14.945816783Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:bc13dece9246766e102cf4475571f3def9690526cf01e7e3752f98457212a356 UID:ca3e136e-233d-4e14-a69e-e23a77e22510 NetNS:/var/run/netns/2af63954-73e6-4871-bef8-06373bc705b9 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x400012db98}] Aliases:map[]}"
	Oct 09 19:04:14 addons-999657 crio[832]: time="2025-10-09T19:04:14.946008994Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Oct 09 19:04:14 addons-999657 crio[832]: time="2025-10-09T19:04:14.960042763Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:bc13dece9246766e102cf4475571f3def9690526cf01e7e3752f98457212a356 UID:ca3e136e-233d-4e14-a69e-e23a77e22510 NetNS:/var/run/netns/2af63954-73e6-4871-bef8-06373bc705b9 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x400012db98}] Aliases:map[]}"
	Oct 09 19:04:14 addons-999657 crio[832]: time="2025-10-09T19:04:14.960208703Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Oct 09 19:04:14 addons-999657 crio[832]: time="2025-10-09T19:04:14.963234296Z" level=info msg="Ran pod sandbox bc13dece9246766e102cf4475571f3def9690526cf01e7e3752f98457212a356 with infra container: default/busybox/POD" id=a11f5ddd-b3d2-4066-b5fb-1b6081124cba name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 09 19:04:14 addons-999657 crio[832]: time="2025-10-09T19:04:14.966866463Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=729abbac-3a71-4ba9-a5a8-eaa77496e893 name=/runtime.v1.ImageService/ImageStatus
	Oct 09 19:04:14 addons-999657 crio[832]: time="2025-10-09T19:04:14.967040665Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=729abbac-3a71-4ba9-a5a8-eaa77496e893 name=/runtime.v1.ImageService/ImageStatus
	Oct 09 19:04:14 addons-999657 crio[832]: time="2025-10-09T19:04:14.967111932Z" level=info msg="Neither image nor artfiact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=729abbac-3a71-4ba9-a5a8-eaa77496e893 name=/runtime.v1.ImageService/ImageStatus
	Oct 09 19:04:14 addons-999657 crio[832]: time="2025-10-09T19:04:14.969655572Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=fcc29a77-d6c1-4d11-8ce5-7d7e85614503 name=/runtime.v1.ImageService/PullImage
	Oct 09 19:04:14 addons-999657 crio[832]: time="2025-10-09T19:04:14.973876484Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Oct 09 19:04:17 addons-999657 crio[832]: time="2025-10-09T19:04:17.185362293Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e" id=fcc29a77-d6c1-4d11-8ce5-7d7e85614503 name=/runtime.v1.ImageService/PullImage
	Oct 09 19:04:17 addons-999657 crio[832]: time="2025-10-09T19:04:17.18626372Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=b37e5e36-8ccb-4c22-9da4-dd00214717dc name=/runtime.v1.ImageService/ImageStatus
	Oct 09 19:04:17 addons-999657 crio[832]: time="2025-10-09T19:04:17.188104627Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=ed6e7018-5f1f-478c-9312-845334551f2e name=/runtime.v1.ImageService/ImageStatus
	Oct 09 19:04:17 addons-999657 crio[832]: time="2025-10-09T19:04:17.196901834Z" level=info msg="Creating container: default/busybox/busybox" id=306fcfd8-798b-4a54-95cb-da171d144780 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:04:17 addons-999657 crio[832]: time="2025-10-09T19:04:17.197768587Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 09 19:04:17 addons-999657 crio[832]: time="2025-10-09T19:04:17.204551964Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 09 19:04:17 addons-999657 crio[832]: time="2025-10-09T19:04:17.20524413Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 09 19:04:17 addons-999657 crio[832]: time="2025-10-09T19:04:17.225027576Z" level=info msg="Created container a993900b2baee6f2a66687effb79d48b42b82dc6628680277b476abea1c5c2a3: default/busybox/busybox" id=306fcfd8-798b-4a54-95cb-da171d144780 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:04:17 addons-999657 crio[832]: time="2025-10-09T19:04:17.226381526Z" level=info msg="Starting container: a993900b2baee6f2a66687effb79d48b42b82dc6628680277b476abea1c5c2a3" id=eae510fe-2cf4-49bf-a44a-301802c1f572 name=/runtime.v1.RuntimeService/StartContainer
	Oct 09 19:04:17 addons-999657 crio[832]: time="2025-10-09T19:04:17.228270283Z" level=info msg="Started container" PID=5061 containerID=a993900b2baee6f2a66687effb79d48b42b82dc6628680277b476abea1c5c2a3 description=default/busybox/busybox id=eae510fe-2cf4-49bf-a44a-301802c1f572 name=/runtime.v1.RuntimeService/StartContainer sandboxID=bc13dece9246766e102cf4475571f3def9690526cf01e7e3752f98457212a356
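The CRI-O entries above trace one full pod launch over the CRI gRPC surface: RunPodSandbox (with the kindnet CNI wiring), ImageStatus, PullImage, CreateContainer, and StartContainer. One way to inspect the resulting state after the fact is to query the runtime service directly; the sketch below is an assumed, minimal Go client (roughly what `crictl ps -a` reports), and the /var/run/crio/crio.sock path is the usual CRI-O default rather than something taken from this report.

    package main

    import (
        "context"
        "fmt"
        "time"

        "google.golang.org/grpc"
        "google.golang.org/grpc/credentials/insecure"
        runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
    )

    func main() {
        // Dial the CRI-O runtime socket (assumed default path).
        conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
            grpc.WithTransportCredentials(insecure.NewCredentials()))
        if err != nil {
            panic(err)
        }
        defer conn.Close()

        client := runtimeapi.NewRuntimeServiceClient(conn)
        ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
        defer cancel()

        // List all containers known to the runtime, similar to `crictl ps -a`.
        resp, err := client.ListContainers(ctx, &runtimeapi.ListContainersRequest{})
        if err != nil {
            panic(err)
        }
        for _, c := range resp.Containers {
            fmt.Printf("%s\t%s\t%s\n", c.Id, c.Metadata.Name, c.State)
        }
    }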
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                                        CREATED              STATE               NAME                                     ATTEMPT             POD ID              POD                                        NAMESPACE
	a993900b2baee       gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e                                          8 seconds ago        Running             busybox                                  0                   bc13dece92467       busybox                                    default
	a1c43a64d2cf0       registry.k8s.io/ingress-nginx/controller@sha256:f99290cbebde470590890356f061fd429ff3def99cc2dedb1fcd21626c5d73d6                             14 seconds ago       Running             controller                               0                   da6bc68cdeb44       ingress-nginx-controller-9cc49f96f-24gzc   ingress-nginx
	917ad92fcb15d       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:2de98fa4b397f92e5e8e05d73caf21787a1c72c41378f3eb7bad72b1e0f4e9ff                                 20 seconds ago       Running             gcp-auth                                 0                   70e3bee7204a5       gcp-auth-78565c9fb4-2hmqj                  gcp-auth
	50e1747ecacea       registry.k8s.io/sig-storage/csi-snapshotter@sha256:bd6b8417b2a83e66ab1d4c1193bb2774f027745bdebbd9e0c1a6518afdecc39a                          26 seconds ago       Running             csi-snapshotter                          0                   f85504d407cce       csi-hostpathplugin-4b7rw                   kube-system
	3f0053d1e02ad       registry.k8s.io/sig-storage/csi-provisioner@sha256:98ffd09c0784203d200e0f8c241501de31c8df79644caac7eed61bd6391e5d49                          28 seconds ago       Running             csi-provisioner                          0                   f85504d407cce       csi-hostpathplugin-4b7rw                   kube-system
	763d4ca0038d6       c67c707f59d87e1add5896e856d3ed36fbff2a778620f70d33b799e0541a77e3                                                                             29 seconds ago       Exited              patch                                    3                   09095f1cd8dc7       ingress-nginx-admission-patch-s9hrl        ingress-nginx
	4e9a584f93742       registry.k8s.io/sig-storage/livenessprobe@sha256:8b00c6e8f52639ed9c6f866085893ab688e57879741b3089e3cfa9998502e158                            30 seconds ago       Running             liveness-probe                           0                   f85504d407cce       csi-hostpathplugin-4b7rw                   kube-system
	f2087bf38944f       registry.k8s.io/sig-storage/hostpathplugin@sha256:7b1dfc90a367222067fc468442fdf952e20fc5961f25c1ad654300ddc34d7083                           31 seconds ago       Running             hostpath                                 0                   f85504d407cce       csi-hostpathplugin-4b7rw                   kube-system
	93ca74439d1e3       registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:511b8c8ac828194a753909d26555ff08bc12f497dd8daeb83fe9d593693a26c1                32 seconds ago       Running             node-driver-registrar                    0                   f85504d407cce       csi-hostpathplugin-4b7rw                   kube-system
	b544dfbf81fb5       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:f279436ecca5b88c20fd93c0d2a668ace136058ecad987e96e26014585e335b4                            34 seconds ago       Running             gadget                                   0                   5184138858d43       gadget-fh5x6                               gadget
	4011ef25cebcc       gcr.io/k8s-minikube/kube-registry-proxy@sha256:26c84a64530a67aa4d749dd4356d67ea27a2576e4d25b640d21857b0574cfd4b                              38 seconds ago       Running             registry-proxy                           0                   ac1610714c97d       registry-proxy-q9p6k                       kube-system
	859a72eb5676e       registry.k8s.io/sig-storage/snapshot-controller@sha256:5d668e35c15df6e87e2530da25d557f543182cedbdb39d421b87076463ee9857                      42 seconds ago       Running             volume-snapshot-controller               0                   63d6fe9d487fc       snapshot-controller-7d9fbc56b8-txqvb       kube-system
	bb893c39a97db       registry.k8s.io/metrics-server/metrics-server@sha256:8f49cf1b0688bb0eae18437882dbf6de2c7a2baac71b1492bc4eca25439a1bf2                        42 seconds ago       Running             metrics-server                           0                   439e408f5be0c       metrics-server-85b7d694d7-qgbgn            kube-system
	a9b5e7a178bf7       registry.k8s.io/sig-storage/snapshot-controller@sha256:5d668e35c15df6e87e2530da25d557f543182cedbdb39d421b87076463ee9857                      44 seconds ago       Running             volume-snapshot-controller               0                   cb80d74ddca30       snapshot-controller-7d9fbc56b8-jp7nw       kube-system
	60de2ccc28f1d       docker.io/rancher/local-path-provisioner@sha256:689a2489a24e74426e4a4666e611c988202c5fa995908b0c60133aca3eb87d98                             45 seconds ago       Running             local-path-provisioner                   0                   b698a08011d68       local-path-provisioner-648f6765c9-mnq45    local-path-storage
	cdcd01c9f8f42       registry.k8s.io/sig-storage/csi-attacher@sha256:4b5609c78455de45821910065281a368d5f760b41250f90cbde5110543bdc326                             46 seconds ago       Running             csi-attacher                             0                   f37d469fb0976       csi-hostpath-attacher-0                    kube-system
	39a52fb8859c2       registry.k8s.io/sig-storage/csi-resizer@sha256:82c1945463342884c05a5b2bc31319712ce75b154c279c2a10765f61e0f688af                              48 seconds ago       Running             csi-resizer                              0                   d309b7867c2b5       csi-hostpath-resizer-0                     kube-system
	c0031a36724df       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:73b47a951627d604fcf1cf93ddc15004fe3854f881da22f690854d098255f1c1                   49 seconds ago       Exited              create                                   0                   27f9afb10c2bb       ingress-nginx-admission-create-22c9r       ingress-nginx
	7b7dc9732ce4b       docker.io/library/registry@sha256:f26c394e5b7c3a707c7373c3e9388e44f0d5bdd3def19652c6bd2ac1a0fa6758                                           50 seconds ago       Running             registry                                 0                   5e085abbe31c7       registry-66898fdd98-d8jgl                  kube-system
	ec4db71d717dd       registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:8b9df00898ded1bfb4d8f3672679f29cd9f88e651b76fef64121c8d347dd12c0   51 seconds ago       Running             csi-external-health-monitor-controller   0                   f85504d407cce       csi-hostpathplugin-4b7rw                   kube-system
	f7e6d7b389c66       nvcr.io/nvidia/k8s-device-plugin@sha256:206d989142113ab71eaf27958a0e0a203f40103cf5b48890f5de80fd1b3fcfde                                     53 seconds ago       Running             nvidia-device-plugin-ctr                 0                   48ccb92735f8f       nvidia-device-plugin-daemonset-4lmwx       kube-system
	f415e4df4a3cd       docker.io/marcnuri/yakd@sha256:1c961556224d57fc747de0b1874524208e5fb4f8386f23e9c1c4c18e97109f17                                              About a minute ago   Running             yakd                                     0                   421bca8afe63c       yakd-dashboard-5ff678cb9-vn427             yakd-dashboard
	9313a51d10845       gcr.io/cloud-spanner-emulator/emulator@sha256:c2688dc4b7ecb4546084321d63c2b3b616a54263488137e18fcb7c7005aef086                               About a minute ago   Running             cloud-spanner-emulator                   0                   420265a83e041       cloud-spanner-emulator-86bd5cbb97-qbxnd    default
	fbc396505d84e       docker.io/kicbase/minikube-ingress-dns@sha256:6d710af680d8a9b5a5b1f9047eb83ee4c9258efd3fcd962f938c00bcbb4c5958                               About a minute ago   Running             minikube-ingress-dns                     0                   75a9985ea2fc1       kube-ingress-dns-minikube                  kube-system
	c8fc026ca1019       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                                                             About a minute ago   Running             storage-provisioner                      0                   ffedf1ca53d34       storage-provisioner                        kube-system
	2823efa103e5e       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                                                             About a minute ago   Running             coredns                                  0                   e2005c57356ba       coredns-66bc5c9577-dm266                   kube-system
	532259f4c5926       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                                                             2 minutes ago        Running             kindnet-cni                              0                   93085f5c2d9d7       kindnet-rztm2                              kube-system
	d859645864356       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                                                             2 minutes ago        Running             kube-proxy                               0                   723a94b89d157       kube-proxy-jcwfl                           kube-system
	7fcbf1be4bdef       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                                                             2 minutes ago        Running             kube-scheduler                           0                   e2ee93c8b0fa8       kube-scheduler-addons-999657               kube-system
	09a19318421ae       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                                                             2 minutes ago        Running             kube-controller-manager                  0                   ee77e8fc8c408       kube-controller-manager-addons-999657      kube-system
	aaa0ded06ea4b       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                                                             2 minutes ago        Running             etcd                                     0                   70b5692c7a7d3       etcd-addons-999657                         kube-system
	804d5a04697a7       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                                                             2 minutes ago        Running             kube-apiserver                           0                   3db82d012a194       kube-apiserver-addons-999657               kube-system
	
	
	==> coredns [2823efa103e5ee38b792c909eaeee0c995e8a8302f5b0f522f6d786b3be0e7ba] <==
	[INFO] 10.244.0.18:57834 - 57196 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.000187584s
	[INFO] 10.244.0.18:57834 - 15363 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 94 false 1232" NXDOMAIN qr,rd,ra 83 0.00203596s
	[INFO] 10.244.0.18:57834 - 43245 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 94 false 1232" NXDOMAIN qr,rd,ra 83 0.002002156s
	[INFO] 10.244.0.18:57834 - 53094 "AAAA IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 149 0.000212977s
	[INFO] 10.244.0.18:57834 - 51411 "A IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 110 0.00034929s
	[INFO] 10.244.0.18:58324 - 10750 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.00014711s
	[INFO] 10.244.0.18:58324 - 10280 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000102821s
	[INFO] 10.244.0.18:45191 - 43042 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000091515s
	[INFO] 10.244.0.18:45191 - 42853 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000074786s
	[INFO] 10.244.0.18:42364 - 35227 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000085246s
	[INFO] 10.244.0.18:42364 - 34997 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000154626s
	[INFO] 10.244.0.18:35884 - 24363 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.001292377s
	[INFO] 10.244.0.18:35884 - 24183 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.001346118s
	[INFO] 10.244.0.18:44992 - 39097 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000127517s
	[INFO] 10.244.0.18:44992 - 39310 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000151499s
	[INFO] 10.244.0.20:36500 - 25965 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000191407s
	[INFO] 10.244.0.20:57757 - 41543 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000259457s
	[INFO] 10.244.0.20:55331 - 13630 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.00016302s
	[INFO] 10.244.0.20:57339 - 9710 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000095068s
	[INFO] 10.244.0.20:34063 - 59055 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000083688s
	[INFO] 10.244.0.20:33957 - 46577 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000103429s
	[INFO] 10.244.0.20:35186 - 25859 "AAAA IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.002260936s
	[INFO] 10.244.0.20:38636 - 36177 "A IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.001712181s
	[INFO] 10.244.0.20:44409 - 26902 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 610 0.002625323s
	[INFO] 10.244.0.20:58202 - 9238 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.001881387s
	
	
	==> describe nodes <==
	Name:               addons-999657
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-999657
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=67931d2cc0a2e29153be17ee4f2d502b8a45c9cb
	                    minikube.k8s.io/name=addons-999657
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_09T19_02_13_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-999657
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-999657"}
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 09 Oct 2025 19:02:09 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-999657
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 09 Oct 2025 19:04:25 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 09 Oct 2025 19:04:14 +0000   Thu, 09 Oct 2025 19:02:05 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 09 Oct 2025 19:04:14 +0000   Thu, 09 Oct 2025 19:02:05 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 09 Oct 2025 19:04:14 +0000   Thu, 09 Oct 2025 19:02:05 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 09 Oct 2025 19:04:14 +0000   Thu, 09 Oct 2025 19:02:57 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-999657
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022292Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022292Ki
	  pods:               110
	System Info:
	  Machine ID:                 36212d5d1b0b470f9e6023029f3833c7
	  System UUID:                0cc2ca92-1fed-42ee-b02f-8480b3bcd288
	  Boot ID:                    7eb9c7d9-8be8-415a-b196-052b632584a4
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (26 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         12s
	  default                     cloud-spanner-emulator-86bd5cbb97-qbxnd     0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m6s
	  gadget                      gadget-fh5x6                                0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m4s
	  gcp-auth                    gcp-auth-78565c9fb4-2hmqj                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m
	  ingress-nginx               ingress-nginx-controller-9cc49f96f-24gzc    100m (5%)     0 (0%)      90Mi (1%)        0 (0%)         2m3s
	  kube-system                 coredns-66bc5c9577-dm266                    100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     2m9s
	  kube-system                 csi-hostpath-attacher-0                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m3s
	  kube-system                 csi-hostpath-resizer-0                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m3s
	  kube-system                 csi-hostpathplugin-4b7rw                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         88s
	  kube-system                 etcd-addons-999657                          100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         2m14s
	  kube-system                 kindnet-rztm2                               100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      2m10s
	  kube-system                 kube-apiserver-addons-999657                250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m16s
	  kube-system                 kube-controller-manager-addons-999657       200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m14s
	  kube-system                 kube-ingress-dns-minikube                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m5s
	  kube-system                 kube-proxy-jcwfl                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m10s
	  kube-system                 kube-scheduler-addons-999657                100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m15s
	  kube-system                 metrics-server-85b7d694d7-qgbgn             100m (5%)     0 (0%)      200Mi (2%)       0 (0%)         2m5s
	  kube-system                 nvidia-device-plugin-daemonset-4lmwx        0 (0%)        0 (0%)      0 (0%)           0 (0%)         88s
	  kube-system                 registry-66898fdd98-d8jgl                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m5s
	  kube-system                 registry-creds-764b6fb674-gq9vn             0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m7s
	  kube-system                 registry-proxy-q9p6k                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         88s
	  kube-system                 snapshot-controller-7d9fbc56b8-jp7nw        0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m4s
	  kube-system                 snapshot-controller-7d9fbc56b8-txqvb        0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m4s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m5s
	  local-path-storage          local-path-provisioner-648f6765c9-mnq45     0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m4s
	  yakd-dashboard              yakd-dashboard-5ff678cb9-vn427              0 (0%)        0 (0%)      128Mi (1%)       256Mi (3%)     2m4s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1050m (52%)  100m (5%)
	  memory             638Mi (8%)   476Mi (6%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	  hugepages-32Mi     0 (0%)       0 (0%)
	  hugepages-64Ki     0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age    From             Message
	  ----     ------                   ----   ----             -------
	  Normal   Starting                 2m7s   kube-proxy       
	  Normal   Starting                 2m14s  kubelet          Starting kubelet.
	  Warning  CgroupV1                 2m14s  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  2m14s  kubelet          Node addons-999657 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m14s  kubelet          Node addons-999657 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m14s  kubelet          Node addons-999657 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           2m10s  node-controller  Node addons-999657 event: Registered Node addons-999657 in Controller
	  Normal   NodeReady                89s    kubelet          Node addons-999657 status is now: NodeReady
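As a cross-check, the "Allocated resources" totals above are simply the column sums of the Non-terminated Pods table: CPU requests 100m (ingress-nginx-controller) + 100m (coredns) + 100m (etcd) + 100m (kindnet) + 250m (kube-apiserver) + 200m (kube-controller-manager) + 100m (kube-scheduler) + 100m (metrics-server) = 1050m, about 52% of the node's 2000m capacity; memory requests 90Mi + 70Mi + 100Mi + 50Mi + 200Mi + 128Mi (yakd) = 638Mi; and the only limits set are kindnet's 100m CPU plus the 170Mi + 50Mi + 256Mi = 476Mi memory limits from coredns, kindnet, and yakd.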
	
	
	==> dmesg <==
	[Oct 9 17:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.015195] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.531968] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.036847] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.757016] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +6.932356] kauditd_printk_skb: 36 callbacks suppressed
	[Oct 9 18:02] hrtimer: interrupt took 20603549 ns
	[Oct 9 18:59] kauditd_printk_skb: 8 callbacks suppressed
	[Oct 9 19:02] overlayfs: idmapped layers are currently not supported
	[  +0.066862] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	
	
	==> etcd [aaa0ded06ea4b321c4f9a079d4cf69d526ba351445ed008be4734d67b7ea8524] <==
	{"level":"warn","ts":"2025-10-09T19:02:07.547782Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58214","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T19:02:07.569989Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58238","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T19:02:07.596665Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58254","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T19:02:07.625361Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58280","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T19:02:07.653260Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58288","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T19:02:07.679176Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58314","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T19:02:07.699581Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58332","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T19:02:07.729725Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58350","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T19:02:07.754157Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58368","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T19:02:07.787289Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58396","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T19:02:07.808114Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58420","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T19:02:07.838184Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58446","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T19:02:07.902912Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58456","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T19:02:07.922704Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58474","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T19:02:07.960010Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58496","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T19:02:07.992842Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58514","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T19:02:08.022206Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58526","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T19:02:08.056940Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58556","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T19:02:08.182941Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58568","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T19:02:24.009382Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48210","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T19:02:24.029176Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48222","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T19:02:46.025604Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46108","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T19:02:46.034831Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46130","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T19:02:46.073204Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46140","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T19:02:46.084419Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46168","server-name":"","error":"EOF"}
	
	
	==> gcp-auth [917ad92fcb15d86bdc24453a35ca0c69c4ecc2ec023a6d6a2473c909f6ec3660] <==
	2025/10/09 19:04:05 GCP Auth Webhook started!
	2025/10/09 19:04:14 Ready to marshal response ...
	2025/10/09 19:04:14 Ready to write response ...
	2025/10/09 19:04:14 Ready to marshal response ...
	2025/10/09 19:04:14 Ready to write response ...
	2025/10/09 19:04:14 Ready to marshal response ...
	2025/10/09 19:04:14 Ready to write response ...
	
	
	==> kernel <==
	 19:04:26 up  1:46,  0 user,  load average: 2.83, 2.75, 3.20
	Linux addons-999657 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [532259f4c5926820e3e18f689f80a1bc102631a6a0a05374223820ef91ec414f] <==
	E1009 19:02:47.923419       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1009 19:02:47.923419       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1009 19:02:47.923542       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1009 19:02:47.923602       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	I1009 19:02:49.523158       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1009 19:02:49.523190       1 metrics.go:72] Registering metrics
	I1009 19:02:49.523264       1 controller.go:711] "Syncing nftables rules"
	I1009 19:02:57.922922       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1009 19:02:57.922983       1 main.go:301] handling current node
	I1009 19:03:07.923357       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1009 19:03:07.923488       1 main.go:301] handling current node
	I1009 19:03:17.922357       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1009 19:03:17.922397       1 main.go:301] handling current node
	I1009 19:03:27.922805       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1009 19:03:27.922834       1 main.go:301] handling current node
	I1009 19:03:37.922783       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1009 19:03:37.922828       1 main.go:301] handling current node
	I1009 19:03:47.922284       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1009 19:03:47.922319       1 main.go:301] handling current node
	I1009 19:03:57.922734       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1009 19:03:57.922776       1 main.go:301] handling current node
	I1009 19:04:07.922746       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1009 19:04:07.922773       1 main.go:301] handling current node
	I1009 19:04:17.922192       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1009 19:04:17.922222       1 main.go:301] handling current node
	
	
	==> kube-apiserver [804d5a04697a7b5835636d98ba88b94561d9699443a8eadbbe90fd28d0b160cb] <==
	W1009 19:02:58.008105       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.111.148.126:443: connect: connection refused
	E1009 19:02:58.008140       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.111.148.126:443: connect: connection refused" logger="UnhandledError"
	W1009 19:02:58.155757       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.111.148.126:443: connect: connection refused
	E1009 19:02:58.156135       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.111.148.126:443: connect: connection refused" logger="UnhandledError"
	W1009 19:03:22.815739       1 handler_proxy.go:99] no RequestInfo found in the context
	E1009 19:03:22.815786       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I1009 19:03:22.815800       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1009 19:03:22.816966       1 handler_proxy.go:99] no RequestInfo found in the context
	E1009 19:03:22.817038       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1009 19:03:22.817052       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	E1009 19:03:55.917878       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.108.231.99:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.108.231.99:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.108.231.99:443: connect: connection refused" logger="UnhandledError"
	W1009 19:03:55.917964       1 handler_proxy.go:99] no RequestInfo found in the context
	E1009 19:03:55.918028       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E1009 19:03:55.918724       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.108.231.99:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.108.231.99:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.108.231.99:443: connect: connection refused" logger="UnhandledError"
	E1009 19:03:55.924450       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.108.231.99:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.108.231.99:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.108.231.99:443: connect: connection refused" logger="UnhandledError"
	I1009 19:03:56.023610       1 handler.go:285] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E1009 19:04:23.881853       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:46938: use of closed network connection
	E1009 19:04:24.125235       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:46962: use of closed network connection
	E1009 19:04:24.264253       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:46966: use of closed network connection
	
	
	==> kube-controller-manager [09a19318421aec51d9c6d371040aa4863198795c9d616ed4df00c26edc18b036] <==
	I1009 19:02:16.049698       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1009 19:02:16.049758       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1009 19:02:16.050030       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1009 19:02:16.050092       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1009 19:02:16.050344       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1009 19:02:16.050683       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1009 19:02:16.052406       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1009 19:02:16.053658       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1009 19:02:16.053672       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1009 19:02:16.053682       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1009 19:02:16.060064       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1009 19:02:16.061501       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	E1009 19:02:21.935296       1 replica_set.go:587] "Unhandled Error" err="sync \"kube-system/metrics-server-85b7d694d7\" failed with pods \"metrics-server-85b7d694d7-\" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount \"metrics-server\" not found" logger="UnhandledError"
	E1009 19:02:46.012038       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1009 19:02:46.012214       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="volumesnapshots.snapshot.storage.k8s.io"
	I1009 19:02:46.012261       1 shared_informer.go:349] "Waiting for caches to sync" controller="resource quota"
	I1009 19:02:46.055660       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	I1009 19:02:46.060526       1 shared_informer.go:349] "Waiting for caches to sync" controller="garbage collector"
	I1009 19:02:46.112881       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1009 19:02:46.161478       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1009 19:03:01.064904       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	E1009 19:03:16.119184       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1009 19:03:16.170842       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E1009 19:03:46.127553       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1009 19:03:46.183088       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	
	
	==> kube-proxy [d85964586435683cdf29db9f8d8e0fd3637c91ff01ef302bb910a1397cf75b01] <==
	I1009 19:02:17.961961       1 server_linux.go:53] "Using iptables proxy"
	I1009 19:02:18.083211       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1009 19:02:18.194696       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1009 19:02:18.194734       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1009 19:02:18.194810       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1009 19:02:18.223149       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1009 19:02:18.223200       1 server_linux.go:132] "Using iptables Proxier"
	I1009 19:02:18.272059       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1009 19:02:18.272385       1 server.go:527] "Version info" version="v1.34.1"
	I1009 19:02:18.272404       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1009 19:02:18.278326       1 config.go:200] "Starting service config controller"
	I1009 19:02:18.278342       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1009 19:02:18.278359       1 config.go:106] "Starting endpoint slice config controller"
	I1009 19:02:18.278363       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1009 19:02:18.278373       1 config.go:403] "Starting serviceCIDR config controller"
	I1009 19:02:18.278377       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1009 19:02:18.279145       1 config.go:309] "Starting node config controller"
	I1009 19:02:18.279153       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1009 19:02:18.279159       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1009 19:02:18.379484       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1009 19:02:18.379533       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1009 19:02:18.379569       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [7fcbf1be4bdef0161b5efca4fb661fd9a8fddc41f80f6974b49d4a8bb8d17634] <==
	I1009 19:02:09.523247       1 serving.go:386] Generated self-signed cert in-memory
	W1009 19:02:10.836308       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1009 19:02:10.836922       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1009 19:02:10.836990       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1009 19:02:10.837022       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1009 19:02:10.857955       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1009 19:02:10.858054       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1009 19:02:10.861877       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1009 19:02:10.862687       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1009 19:02:10.862764       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1009 19:02:10.862820       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1009 19:02:10.963174       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 09 19:03:52 addons-999657 kubelet[1301]: I1009 19:03:52.910489    1301 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/registry-proxy-q9p6k" podStartSLOduration=6.331523945 podStartE2EDuration="54.910474418s" podCreationTimestamp="2025-10-09 19:02:58 +0000 UTC" firstStartedPulling="2025-10-09 19:02:59.190946674 +0000 UTC m=+47.199183983" lastFinishedPulling="2025-10-09 19:03:47.769897148 +0000 UTC m=+95.778134456" observedRunningTime="2025-10-09 19:03:47.881629727 +0000 UTC m=+95.889867061" watchObservedRunningTime="2025-10-09 19:03:52.910474418 +0000 UTC m=+100.918711842"
	Oct 09 19:03:56 addons-999657 kubelet[1301]: I1009 19:03:56.161618    1301 scope.go:117] "RemoveContainer" containerID="9171c5cdbe68f2e63d6f16624458e1c999c66582d99b664c91875755c69df3ea"
	Oct 09 19:03:56 addons-999657 kubelet[1301]: I1009 19:03:56.335546    1301 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: hostpath.csi.k8s.io endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0
	Oct 09 19:03:56 addons-999657 kubelet[1301]: I1009 19:03:56.335612    1301 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: hostpath.csi.k8s.io at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock
	Oct 09 19:03:56 addons-999657 kubelet[1301]: I1009 19:03:56.938497    1301 scope.go:117] "RemoveContainer" containerID="9171c5cdbe68f2e63d6f16624458e1c999c66582d99b664c91875755c69df3ea"
	Oct 09 19:03:58 addons-999657 kubelet[1301]: I1009 19:03:58.041949    1301 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6hc9p\" (UniqueName: \"kubernetes.io/projected/4ea49fd1-c533-47aa-b150-a46754d06909-kube-api-access-6hc9p\") pod \"4ea49fd1-c533-47aa-b150-a46754d06909\" (UID: \"4ea49fd1-c533-47aa-b150-a46754d06909\") "
	Oct 09 19:03:58 addons-999657 kubelet[1301]: I1009 19:03:58.051135    1301 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4ea49fd1-c533-47aa-b150-a46754d06909-kube-api-access-6hc9p" (OuterVolumeSpecName: "kube-api-access-6hc9p") pod "4ea49fd1-c533-47aa-b150-a46754d06909" (UID: "4ea49fd1-c533-47aa-b150-a46754d06909"). InnerVolumeSpecName "kube-api-access-6hc9p". PluginName "kubernetes.io/projected", VolumeGIDValue ""
	Oct 09 19:03:58 addons-999657 kubelet[1301]: I1009 19:03:58.143880    1301 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-6hc9p\" (UniqueName: \"kubernetes.io/projected/4ea49fd1-c533-47aa-b150-a46754d06909-kube-api-access-6hc9p\") on node \"addons-999657\" DevicePath \"\""
	Oct 09 19:03:58 addons-999657 kubelet[1301]: I1009 19:03:58.991348    1301 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="09095f1cd8dc7643a98ffef5859f44eb7d13e6eb81c7f8e0dfa011a38e75a139"
	Oct 09 19:04:00 addons-999657 kubelet[1301]: I1009 19:04:00.018428    1301 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/csi-hostpathplugin-4b7rw" podStartSLOduration=1.475188819 podStartE2EDuration="1m2.018399243s" podCreationTimestamp="2025-10-09 19:02:58 +0000 UTC" firstStartedPulling="2025-10-09 19:02:58.783163524 +0000 UTC m=+46.791400841" lastFinishedPulling="2025-10-09 19:03:59.326373948 +0000 UTC m=+107.334611265" observedRunningTime="2025-10-09 19:04:00.016033238 +0000 UTC m=+108.024270563" watchObservedRunningTime="2025-10-09 19:04:00.018399243 +0000 UTC m=+108.026636568"
	Oct 09 19:04:02 addons-999657 kubelet[1301]: E1009 19:04:02.027728    1301 secret.go:189] Couldn't get secret kube-system/registry-creds-gcr: secret "registry-creds-gcr" not found
	Oct 09 19:04:02 addons-999657 kubelet[1301]: E1009 19:04:02.027830    1301 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bbaa910d-1ec1-4260-9cf0-961ed5abd1c8-gcr-creds podName:bbaa910d-1ec1-4260-9cf0-961ed5abd1c8 nodeName:}" failed. No retries permitted until 2025-10-09 19:05:06.027812398 +0000 UTC m=+174.036049706 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "gcr-creds" (UniqueName: "kubernetes.io/secret/bbaa910d-1ec1-4260-9cf0-961ed5abd1c8-gcr-creds") pod "registry-creds-764b6fb674-gq9vn" (UID: "bbaa910d-1ec1-4260-9cf0-961ed5abd1c8") : secret "registry-creds-gcr" not found
	Oct 09 19:04:02 addons-999657 kubelet[1301]: W1009 19:04:02.258858    1301 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/ecd6cd18f751718cb40377c19ed8fb91d99c6fd2c7932de2df67df8a9fb7b9bd/crio-70e3bee7204a5b6a1e816a3a0180a620627062f6b1f3d5f6f85119e7af2b4348 WatchSource:0}: Error finding container 70e3bee7204a5b6a1e816a3a0180a620627062f6b1f3d5f6f85119e7af2b4348: Status 404 returned error can't find the container with id 70e3bee7204a5b6a1e816a3a0180a620627062f6b1f3d5f6f85119e7af2b4348
	Oct 09 19:04:02 addons-999657 kubelet[1301]: W1009 19:04:02.369075    1301 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/ecd6cd18f751718cb40377c19ed8fb91d99c6fd2c7932de2df67df8a9fb7b9bd/crio-da6bc68cdeb44422e4853f02985f95e0578786c017ada99d443bf3a6f93a3d15 WatchSource:0}: Error finding container da6bc68cdeb44422e4853f02985f95e0578786c017ada99d443bf3a6f93a3d15: Status 404 returned error can't find the container with id da6bc68cdeb44422e4853f02985f95e0578786c017ada99d443bf3a6f93a3d15
	Oct 09 19:04:06 addons-999657 kubelet[1301]: I1009 19:04:06.178439    1301 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a362b98e-f054-4c6a-94ff-a40aba718169" path="/var/lib/kubelet/pods/a362b98e-f054-4c6a-94ff-a40aba718169/volumes"
	Oct 09 19:04:08 addons-999657 kubelet[1301]: I1009 19:04:08.036111    1301 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="gcp-auth/gcp-auth-78565c9fb4-2hmqj" podStartSLOduration=99.001720703 podStartE2EDuration="1m42.036087118s" podCreationTimestamp="2025-10-09 19:02:26 +0000 UTC" firstStartedPulling="2025-10-09 19:04:02.261542296 +0000 UTC m=+110.269779605" lastFinishedPulling="2025-10-09 19:04:05.295908711 +0000 UTC m=+113.304146020" observedRunningTime="2025-10-09 19:04:06.07959714 +0000 UTC m=+114.087834474" watchObservedRunningTime="2025-10-09 19:04:08.036087118 +0000 UTC m=+116.044324427"
	Oct 09 19:04:08 addons-999657 kubelet[1301]: I1009 19:04:08.166041    1301 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ee723f53-0159-4972-a790-82a9d4ae0f90" path="/var/lib/kubelet/pods/ee723f53-0159-4972-a790-82a9d4ae0f90/volumes"
	Oct 09 19:04:12 addons-999657 kubelet[1301]: I1009 19:04:12.106066    1301 scope.go:117] "RemoveContainer" containerID="43b3dc605f9410fa7b1a3ff383c6a5b46e5e9e03d6d89213574acfa3f5f5cb7d"
	Oct 09 19:04:12 addons-999657 kubelet[1301]: I1009 19:04:12.122165    1301 scope.go:117] "RemoveContainer" containerID="ecba8ba7ec81a32e90bfb990f2043f4c65375aea497b156d1fa8f08228378a89"
	Oct 09 19:04:12 addons-999657 kubelet[1301]: E1009 19:04:12.265952    1301 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/a4d052952b8a56eaba1786240d8d2c246c5f2930847f9fd6973d922a424fa054/diff" to get inode usage: stat /var/lib/containers/storage/overlay/a4d052952b8a56eaba1786240d8d2c246c5f2930847f9fd6973d922a424fa054/diff: no such file or directory, extraDiskErr: <nil>
	Oct 09 19:04:12 addons-999657 kubelet[1301]: E1009 19:04:12.284398    1301 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/36f4e4dcc2c77e9ad2138b51c825781dc05b8a2821aee4c6683ccbaff06fe3f1/diff" to get inode usage: stat /var/lib/containers/storage/overlay/36f4e4dcc2c77e9ad2138b51c825781dc05b8a2821aee4c6683ccbaff06fe3f1/diff: no such file or directory, extraDiskErr: <nil>
	Oct 09 19:04:14 addons-999657 kubelet[1301]: I1009 19:04:14.628714    1301 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="ingress-nginx/ingress-nginx-controller-9cc49f96f-24gzc" podStartSLOduration=102.410559435 podStartE2EDuration="1m51.628694087s" podCreationTimestamp="2025-10-09 19:02:23 +0000 UTC" firstStartedPulling="2025-10-09 19:04:02.372433344 +0000 UTC m=+110.380670653" lastFinishedPulling="2025-10-09 19:04:11.590567996 +0000 UTC m=+119.598805305" observedRunningTime="2025-10-09 19:04:12.108048448 +0000 UTC m=+120.116285773" watchObservedRunningTime="2025-10-09 19:04:14.628694087 +0000 UTC m=+122.636931404"
	Oct 09 19:04:14 addons-999657 kubelet[1301]: I1009 19:04:14.739904    1301 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pj728\" (UniqueName: \"kubernetes.io/projected/ca3e136e-233d-4e14-a69e-e23a77e22510-kube-api-access-pj728\") pod \"busybox\" (UID: \"ca3e136e-233d-4e14-a69e-e23a77e22510\") " pod="default/busybox"
	Oct 09 19:04:14 addons-999657 kubelet[1301]: I1009 19:04:14.740308    1301 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/ca3e136e-233d-4e14-a69e-e23a77e22510-gcp-creds\") pod \"busybox\" (UID: \"ca3e136e-233d-4e14-a69e-e23a77e22510\") " pod="default/busybox"
	Oct 09 19:04:23 addons-999657 kubelet[1301]: I1009 19:04:23.114942    1301 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/busybox" podStartSLOduration=6.895136192 podStartE2EDuration="9.11492355s" podCreationTimestamp="2025-10-09 19:04:14 +0000 UTC" firstStartedPulling="2025-10-09 19:04:14.967342664 +0000 UTC m=+122.975579981" lastFinishedPulling="2025-10-09 19:04:17.187130031 +0000 UTC m=+125.195367339" observedRunningTime="2025-10-09 19:04:18.131624218 +0000 UTC m=+126.139861527" watchObservedRunningTime="2025-10-09 19:04:23.11492355 +0000 UTC m=+131.123160867"
	
	
	==> storage-provisioner [c8fc026ca1019d8a0f4406f6cc4f8f68a03b36d38c792e26f09d7d78bf7ea9e3] <==
	W1009 19:04:01.675475       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1009 19:04:03.679266       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1009 19:04:03.687938       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1009 19:04:05.691772       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1009 19:04:05.698167       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1009 19:04:07.701631       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1009 19:04:07.707117       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1009 19:04:09.710877       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1009 19:04:09.720990       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1009 19:04:11.729335       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1009 19:04:11.749736       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1009 19:04:13.752769       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1009 19:04:13.757459       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1009 19:04:15.760327       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1009 19:04:15.765532       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1009 19:04:17.768727       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1009 19:04:17.773476       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1009 19:04:19.777358       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1009 19:04:19.784391       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1009 19:04:21.787769       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1009 19:04:21.792543       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1009 19:04:23.796452       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1009 19:04:23.804051       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1009 19:04:25.808295       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1009 19:04:25.815891       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-999657 -n addons-999657
helpers_test.go:269: (dbg) Run:  kubectl --context addons-999657 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: ingress-nginx-admission-create-22c9r ingress-nginx-admission-patch-s9hrl registry-creds-764b6fb674-gq9vn
helpers_test.go:282: ======> post-mortem[TestAddons/parallel/Headlamp]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context addons-999657 describe pod ingress-nginx-admission-create-22c9r ingress-nginx-admission-patch-s9hrl registry-creds-764b6fb674-gq9vn
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context addons-999657 describe pod ingress-nginx-admission-create-22c9r ingress-nginx-admission-patch-s9hrl registry-creds-764b6fb674-gq9vn: exit status 1 (87.612163ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-22c9r" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-s9hrl" not found
	Error from server (NotFound): pods "registry-creds-764b6fb674-gq9vn" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context addons-999657 describe pod ingress-nginx-admission-create-22c9r ingress-nginx-admission-patch-s9hrl registry-creds-764b6fb674-gq9vn: exit status 1
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-999657 addons disable headlamp --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-999657 addons disable headlamp --alsologtostderr -v=1: exit status 11 (259.692154ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1009 19:04:27.475047  303451 out.go:360] Setting OutFile to fd 1 ...
	I1009 19:04:27.475885  303451 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1009 19:04:27.475901  303451 out.go:374] Setting ErrFile to fd 2...
	I1009 19:04:27.475906  303451 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1009 19:04:27.476196  303451 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21683-294150/.minikube/bin
	I1009 19:04:27.476505  303451 mustload.go:65] Loading cluster: addons-999657
	I1009 19:04:27.476905  303451 config.go:182] Loaded profile config "addons-999657": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1009 19:04:27.476923  303451 addons.go:606] checking whether the cluster is paused
	I1009 19:04:27.477026  303451 config.go:182] Loaded profile config "addons-999657": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1009 19:04:27.477041  303451 host.go:66] Checking if "addons-999657" exists ...
	I1009 19:04:27.477521  303451 cli_runner.go:164] Run: docker container inspect addons-999657 --format={{.State.Status}}
	I1009 19:04:27.494489  303451 ssh_runner.go:195] Run: systemctl --version
	I1009 19:04:27.494546  303451 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-999657
	I1009 19:04:27.513087  303451 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/21683-294150/.minikube/machines/addons-999657/id_rsa Username:docker}
	I1009 19:04:27.619743  303451 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1009 19:04:27.619877  303451 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1009 19:04:27.651158  303451 cri.go:89] found id: "50e1747ecacea77a5f93e1d46aa99c4cb1fbce08f8f2f546154db1f9be02c796"
	I1009 19:04:27.651187  303451 cri.go:89] found id: "3f0053d1e02ad19ca3e128caaadebe9c4c0975ba7cb365a25e3c6d52870a17f0"
	I1009 19:04:27.651194  303451 cri.go:89] found id: "4e9a584f93742f2382e823963bed9b224f3ccba7c95660eb1dbca3c6b9908b3f"
	I1009 19:04:27.651198  303451 cri.go:89] found id: "f2087bf38944fc739afaac1113f222793a13154e659f060c4cdece3a7fa73071"
	I1009 19:04:27.651201  303451 cri.go:89] found id: "93ca74439d1e37925a704ede2fce384f9f9489c7b96ead6d63884128a4d9b0d1"
	I1009 19:04:27.651205  303451 cri.go:89] found id: "4011ef25cebccd0072dd10b711fedb8be54cb74589db33f0e4a5e667873eed44"
	I1009 19:04:27.651209  303451 cri.go:89] found id: "859a72eb5676e02ccdbfc8116afe2d9c4f2283fc97f6e130eb77ba45fe1f2ddf"
	I1009 19:04:27.651212  303451 cri.go:89] found id: "bb893c39a97db27f01be58b5eec66390173c64aa6dbf5fcc501e526bd34e4f74"
	I1009 19:04:27.651215  303451 cri.go:89] found id: "a9b5e7a178bf7423b4d23385f8409bd2da8f1ec9e312f7a1c786a7b9f1ec78fe"
	I1009 19:04:27.651221  303451 cri.go:89] found id: "cdcd01c9f8f4271bde354d676a0d7b97cf89b90bcce19fbab3de17f21aebb44c"
	I1009 19:04:27.651224  303451 cri.go:89] found id: "39a52fb8859c2040cedab3dbdc0662ae79f7d3abba463258d2c504bf8830448b"
	I1009 19:04:27.651227  303451 cri.go:89] found id: "7b7dc9732ce4b2127334e1e0c5b92a0ae3fbb0d316e98281fb8f8e8269c4b998"
	I1009 19:04:27.651230  303451 cri.go:89] found id: "ec4db71d717ddecd989934884e42ed0846d635333858d9d800181bfa0530c564"
	I1009 19:04:27.651233  303451 cri.go:89] found id: "f7e6d7b389c66f70de1f9dfa7a02589e922c296513ed0a3835867069f4fa9db8"
	I1009 19:04:27.651241  303451 cri.go:89] found id: "fbc396505d84e35ea37081e17222da9738c8ff9edd4bf2e014fdb1bf99f6de56"
	I1009 19:04:27.651251  303451 cri.go:89] found id: "c8fc026ca1019d8a0f4406f6cc4f8f68a03b36d38c792e26f09d7d78bf7ea9e3"
	I1009 19:04:27.651255  303451 cri.go:89] found id: "2823efa103e5ee38b792c909eaeee0c995e8a8302f5b0f522f6d786b3be0e7ba"
	I1009 19:04:27.651259  303451 cri.go:89] found id: "532259f4c5926820e3e18f689f80a1bc102631a6a0a05374223820ef91ec414f"
	I1009 19:04:27.651261  303451 cri.go:89] found id: "d85964586435683cdf29db9f8d8e0fd3637c91ff01ef302bb910a1397cf75b01"
	I1009 19:04:27.651264  303451 cri.go:89] found id: "7fcbf1be4bdef0161b5efca4fb661fd9a8fddc41f80f6974b49d4a8bb8d17634"
	I1009 19:04:27.651269  303451 cri.go:89] found id: "09a19318421aec51d9c6d371040aa4863198795c9d616ed4df00c26edc18b036"
	I1009 19:04:27.651272  303451 cri.go:89] found id: "aaa0ded06ea4b321c4f9a079d4cf69d526ba351445ed008be4734d67b7ea8524"
	I1009 19:04:27.651274  303451 cri.go:89] found id: "804d5a04697a7b5835636d98ba88b94561d9699443a8eadbbe90fd28d0b160cb"
	I1009 19:04:27.651278  303451 cri.go:89] found id: ""
	I1009 19:04:27.651329  303451 ssh_runner.go:195] Run: sudo runc list -f json
	I1009 19:04:27.667086  303451 out.go:203] 
	W1009 19:04:27.670027  303451 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-09T19:04:27Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-09T19:04:27Z" level=error msg="open /run/runc: no such file or directory"
	
	W1009 19:04:27.670051  303451 out.go:285] * 
	* 
	W1009 19:04:27.675082  303451 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_efe3f0a65eabdab15324ffdebd5a66da17706a9c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_efe3f0a65eabdab15324ffdebd5a66da17706a9c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1009 19:04:27.678059  303451 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable headlamp addon: args "out/minikube-linux-arm64 -p addons-999657 addons disable headlamp --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Headlamp (3.14s)
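
Shared failure mode for the addons disable calls in this group, as visible in the stderr above: minikube first lists kube-system containers over the CRI (crictl succeeds), then performs its paused-cluster check by running sudo runc list -f json on the node, which exits 1 on this crio node ("open /run/runc: no such file or directory") and is surfaced as MK_ADDON_DISABLE_PAUSED. A minimal sketch of reproducing that two-step check by hand, assuming the addons-999657 profile from this run is still up:

	# Step 1: CRI listing, the same command the disable path runs; this step succeeds in the logs above.
	out/minikube-linux-arm64 -p addons-999657 ssh -- sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system
	# Step 2: the pause check shells out to runc directly; on this crio node /run/runc is absent, so it fails.
	out/minikube-linux-arm64 -p addons-999657 ssh -- sudo runc list -f json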

                                                
                                    
TestAddons/parallel/CloudSpanner (6.32s)

                                                
                                                
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:352: "cloud-spanner-emulator-86bd5cbb97-qbxnd" [8d733f66-1068-46f9-9566-1a0b284d1f8c] Running
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 6.010102357s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-999657 addons disable cloud-spanner --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-999657 addons disable cloud-spanner --alsologtostderr -v=1: exit status 11 (285.724857ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1009 19:04:45.987071  303919 out.go:360] Setting OutFile to fd 1 ...
	I1009 19:04:45.987901  303919 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1009 19:04:45.987944  303919 out.go:374] Setting ErrFile to fd 2...
	I1009 19:04:45.987969  303919 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1009 19:04:45.988261  303919 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21683-294150/.minikube/bin
	I1009 19:04:45.988608  303919 mustload.go:65] Loading cluster: addons-999657
	I1009 19:04:45.989018  303919 config.go:182] Loaded profile config "addons-999657": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1009 19:04:45.989063  303919 addons.go:606] checking whether the cluster is paused
	I1009 19:04:45.989237  303919 config.go:182] Loaded profile config "addons-999657": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1009 19:04:45.989276  303919 host.go:66] Checking if "addons-999657" exists ...
	I1009 19:04:45.989770  303919 cli_runner.go:164] Run: docker container inspect addons-999657 --format={{.State.Status}}
	I1009 19:04:46.012362  303919 ssh_runner.go:195] Run: systemctl --version
	I1009 19:04:46.012418  303919 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-999657
	I1009 19:04:46.037927  303919 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/21683-294150/.minikube/machines/addons-999657/id_rsa Username:docker}
	I1009 19:04:46.143798  303919 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1009 19:04:46.143886  303919 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1009 19:04:46.178405  303919 cri.go:89] found id: "50e1747ecacea77a5f93e1d46aa99c4cb1fbce08f8f2f546154db1f9be02c796"
	I1009 19:04:46.178440  303919 cri.go:89] found id: "3f0053d1e02ad19ca3e128caaadebe9c4c0975ba7cb365a25e3c6d52870a17f0"
	I1009 19:04:46.178446  303919 cri.go:89] found id: "4e9a584f93742f2382e823963bed9b224f3ccba7c95660eb1dbca3c6b9908b3f"
	I1009 19:04:46.178450  303919 cri.go:89] found id: "f2087bf38944fc739afaac1113f222793a13154e659f060c4cdece3a7fa73071"
	I1009 19:04:46.178453  303919 cri.go:89] found id: "93ca74439d1e37925a704ede2fce384f9f9489c7b96ead6d63884128a4d9b0d1"
	I1009 19:04:46.178458  303919 cri.go:89] found id: "4011ef25cebccd0072dd10b711fedb8be54cb74589db33f0e4a5e667873eed44"
	I1009 19:04:46.178461  303919 cri.go:89] found id: "859a72eb5676e02ccdbfc8116afe2d9c4f2283fc97f6e130eb77ba45fe1f2ddf"
	I1009 19:04:46.178465  303919 cri.go:89] found id: "bb893c39a97db27f01be58b5eec66390173c64aa6dbf5fcc501e526bd34e4f74"
	I1009 19:04:46.178468  303919 cri.go:89] found id: "a9b5e7a178bf7423b4d23385f8409bd2da8f1ec9e312f7a1c786a7b9f1ec78fe"
	I1009 19:04:46.178481  303919 cri.go:89] found id: "cdcd01c9f8f4271bde354d676a0d7b97cf89b90bcce19fbab3de17f21aebb44c"
	I1009 19:04:46.178488  303919 cri.go:89] found id: "39a52fb8859c2040cedab3dbdc0662ae79f7d3abba463258d2c504bf8830448b"
	I1009 19:04:46.178492  303919 cri.go:89] found id: "7b7dc9732ce4b2127334e1e0c5b92a0ae3fbb0d316e98281fb8f8e8269c4b998"
	I1009 19:04:46.178498  303919 cri.go:89] found id: "ec4db71d717ddecd989934884e42ed0846d635333858d9d800181bfa0530c564"
	I1009 19:04:46.178502  303919 cri.go:89] found id: "f7e6d7b389c66f70de1f9dfa7a02589e922c296513ed0a3835867069f4fa9db8"
	I1009 19:04:46.178505  303919 cri.go:89] found id: "fbc396505d84e35ea37081e17222da9738c8ff9edd4bf2e014fdb1bf99f6de56"
	I1009 19:04:46.178515  303919 cri.go:89] found id: "c8fc026ca1019d8a0f4406f6cc4f8f68a03b36d38c792e26f09d7d78bf7ea9e3"
	I1009 19:04:46.178523  303919 cri.go:89] found id: "2823efa103e5ee38b792c909eaeee0c995e8a8302f5b0f522f6d786b3be0e7ba"
	I1009 19:04:46.178528  303919 cri.go:89] found id: "532259f4c5926820e3e18f689f80a1bc102631a6a0a05374223820ef91ec414f"
	I1009 19:04:46.178531  303919 cri.go:89] found id: "d85964586435683cdf29db9f8d8e0fd3637c91ff01ef302bb910a1397cf75b01"
	I1009 19:04:46.178534  303919 cri.go:89] found id: "7fcbf1be4bdef0161b5efca4fb661fd9a8fddc41f80f6974b49d4a8bb8d17634"
	I1009 19:04:46.178539  303919 cri.go:89] found id: "09a19318421aec51d9c6d371040aa4863198795c9d616ed4df00c26edc18b036"
	I1009 19:04:46.178542  303919 cri.go:89] found id: "aaa0ded06ea4b321c4f9a079d4cf69d526ba351445ed008be4734d67b7ea8524"
	I1009 19:04:46.178545  303919 cri.go:89] found id: "804d5a04697a7b5835636d98ba88b94561d9699443a8eadbbe90fd28d0b160cb"
	I1009 19:04:46.178548  303919 cri.go:89] found id: ""
	I1009 19:04:46.178607  303919 ssh_runner.go:195] Run: sudo runc list -f json
	I1009 19:04:46.199902  303919 out.go:203] 
	W1009 19:04:46.202945  303919 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-09T19:04:46Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-09T19:04:46Z" level=error msg="open /run/runc: no such file or directory"
	
	W1009 19:04:46.202973  303919 out.go:285] * 
	* 
	W1009 19:04:46.208093  303919 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e93ff976b7e98e1dc466aded9385c0856b6d1b41_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e93ff976b7e98e1dc466aded9385c0856b6d1b41_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1009 19:04:46.211063  303919 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable cloud-spanner addon: args "out/minikube-linux-arm64 -p addons-999657 addons disable cloud-spanner --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/CloudSpanner (6.32s)

                                                
                                    
TestAddons/parallel/LocalPath (8.54s)

                                                
                                                
=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:949: (dbg) Run:  kubectl --context addons-999657 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:955: (dbg) Run:  kubectl --context addons-999657 apply -f testdata/storage-provisioner-rancher/pod.yaml
2025/10/09 19:04:39 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:959: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-999657 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-999657 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-999657 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-999657 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-999657 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:962: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:352: "test-local-path" [6d9b8b91-888d-4054-93d9-2bb57cabebec] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "test-local-path" [6d9b8b91-888d-4054-93d9-2bb57cabebec] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:352: "test-local-path" [6d9b8b91-888d-4054-93d9-2bb57cabebec] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:962: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 3.002946598s
addons_test.go:967: (dbg) Run:  kubectl --context addons-999657 get pvc test-pvc -o=json
addons_test.go:976: (dbg) Run:  out/minikube-linux-arm64 -p addons-999657 ssh "cat /opt/local-path-provisioner/pvc-043c2597-2dd6-45b6-98a9-80ebf890bc70_default_test-pvc/file1"
addons_test.go:988: (dbg) Run:  kubectl --context addons-999657 delete pod test-local-path
addons_test.go:992: (dbg) Run:  kubectl --context addons-999657 delete pvc test-pvc
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-999657 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-999657 addons disable storage-provisioner-rancher --alsologtostderr -v=1: exit status 11 (270.858377ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1009 19:04:47.604550  304070 out.go:360] Setting OutFile to fd 1 ...
	I1009 19:04:47.605490  304070 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1009 19:04:47.605540  304070 out.go:374] Setting ErrFile to fd 2...
	I1009 19:04:47.605562  304070 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1009 19:04:47.605886  304070 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21683-294150/.minikube/bin
	I1009 19:04:47.606246  304070 mustload.go:65] Loading cluster: addons-999657
	I1009 19:04:47.606703  304070 config.go:182] Loaded profile config "addons-999657": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1009 19:04:47.606749  304070 addons.go:606] checking whether the cluster is paused
	I1009 19:04:47.606879  304070 config.go:182] Loaded profile config "addons-999657": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1009 19:04:47.606934  304070 host.go:66] Checking if "addons-999657" exists ...
	I1009 19:04:47.607428  304070 cli_runner.go:164] Run: docker container inspect addons-999657 --format={{.State.Status}}
	I1009 19:04:47.625785  304070 ssh_runner.go:195] Run: systemctl --version
	I1009 19:04:47.625838  304070 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-999657
	I1009 19:04:47.647633  304070 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/21683-294150/.minikube/machines/addons-999657/id_rsa Username:docker}
	I1009 19:04:47.752702  304070 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1009 19:04:47.752807  304070 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1009 19:04:47.784797  304070 cri.go:89] found id: "50e1747ecacea77a5f93e1d46aa99c4cb1fbce08f8f2f546154db1f9be02c796"
	I1009 19:04:47.784818  304070 cri.go:89] found id: "3f0053d1e02ad19ca3e128caaadebe9c4c0975ba7cb365a25e3c6d52870a17f0"
	I1009 19:04:47.784823  304070 cri.go:89] found id: "4e9a584f93742f2382e823963bed9b224f3ccba7c95660eb1dbca3c6b9908b3f"
	I1009 19:04:47.784832  304070 cri.go:89] found id: "f2087bf38944fc739afaac1113f222793a13154e659f060c4cdece3a7fa73071"
	I1009 19:04:47.784836  304070 cri.go:89] found id: "93ca74439d1e37925a704ede2fce384f9f9489c7b96ead6d63884128a4d9b0d1"
	I1009 19:04:47.784839  304070 cri.go:89] found id: "4011ef25cebccd0072dd10b711fedb8be54cb74589db33f0e4a5e667873eed44"
	I1009 19:04:47.784842  304070 cri.go:89] found id: "859a72eb5676e02ccdbfc8116afe2d9c4f2283fc97f6e130eb77ba45fe1f2ddf"
	I1009 19:04:47.784846  304070 cri.go:89] found id: "bb893c39a97db27f01be58b5eec66390173c64aa6dbf5fcc501e526bd34e4f74"
	I1009 19:04:47.784848  304070 cri.go:89] found id: "a9b5e7a178bf7423b4d23385f8409bd2da8f1ec9e312f7a1c786a7b9f1ec78fe"
	I1009 19:04:47.784855  304070 cri.go:89] found id: "cdcd01c9f8f4271bde354d676a0d7b97cf89b90bcce19fbab3de17f21aebb44c"
	I1009 19:04:47.784858  304070 cri.go:89] found id: "39a52fb8859c2040cedab3dbdc0662ae79f7d3abba463258d2c504bf8830448b"
	I1009 19:04:47.784861  304070 cri.go:89] found id: "7b7dc9732ce4b2127334e1e0c5b92a0ae3fbb0d316e98281fb8f8e8269c4b998"
	I1009 19:04:47.784864  304070 cri.go:89] found id: "ec4db71d717ddecd989934884e42ed0846d635333858d9d800181bfa0530c564"
	I1009 19:04:47.784867  304070 cri.go:89] found id: "f7e6d7b389c66f70de1f9dfa7a02589e922c296513ed0a3835867069f4fa9db8"
	I1009 19:04:47.784870  304070 cri.go:89] found id: "fbc396505d84e35ea37081e17222da9738c8ff9edd4bf2e014fdb1bf99f6de56"
	I1009 19:04:47.784874  304070 cri.go:89] found id: "c8fc026ca1019d8a0f4406f6cc4f8f68a03b36d38c792e26f09d7d78bf7ea9e3"
	I1009 19:04:47.784877  304070 cri.go:89] found id: "2823efa103e5ee38b792c909eaeee0c995e8a8302f5b0f522f6d786b3be0e7ba"
	I1009 19:04:47.784881  304070 cri.go:89] found id: "532259f4c5926820e3e18f689f80a1bc102631a6a0a05374223820ef91ec414f"
	I1009 19:04:47.784884  304070 cri.go:89] found id: "d85964586435683cdf29db9f8d8e0fd3637c91ff01ef302bb910a1397cf75b01"
	I1009 19:04:47.784887  304070 cri.go:89] found id: "7fcbf1be4bdef0161b5efca4fb661fd9a8fddc41f80f6974b49d4a8bb8d17634"
	I1009 19:04:47.784892  304070 cri.go:89] found id: "09a19318421aec51d9c6d371040aa4863198795c9d616ed4df00c26edc18b036"
	I1009 19:04:47.784895  304070 cri.go:89] found id: "aaa0ded06ea4b321c4f9a079d4cf69d526ba351445ed008be4734d67b7ea8524"
	I1009 19:04:47.784898  304070 cri.go:89] found id: "804d5a04697a7b5835636d98ba88b94561d9699443a8eadbbe90fd28d0b160cb"
	I1009 19:04:47.784901  304070 cri.go:89] found id: ""
	I1009 19:04:47.784956  304070 ssh_runner.go:195] Run: sudo runc list -f json
	I1009 19:04:47.802583  304070 out.go:203] 
	W1009 19:04:47.806823  304070 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-09T19:04:47Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-09T19:04:47Z" level=error msg="open /run/runc: no such file or directory"
	
	W1009 19:04:47.806863  304070 out.go:285] * 
	* 
	W1009 19:04:47.812055  304070 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e8b2053d4ef30ba659303f708d034237180eb1ed_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e8b2053d4ef30ba659303f708d034237180eb1ed_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1009 19:04:47.816557  304070 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable storage-provisioner-rancher addon: args "out/minikube-linux-arm64 -p addons-999657 addons disable storage-provisioner-rancher --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/LocalPath (8.54s)

                                                
                                    
TestAddons/parallel/NvidiaDevicePlugin (6.3s)

                                                
                                                
=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:352: "nvidia-device-plugin-daemonset-4lmwx" [2cd943cc-3d6e-418d-ab07-d6fe025ccc38] Running
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.003542927s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-999657 addons disable nvidia-device-plugin --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-999657 addons disable nvidia-device-plugin --alsologtostderr -v=1: exit status 11 (292.766606ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1009 19:04:39.045608  303624 out.go:360] Setting OutFile to fd 1 ...
	I1009 19:04:39.046524  303624 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1009 19:04:39.046541  303624 out.go:374] Setting ErrFile to fd 2...
	I1009 19:04:39.046548  303624 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1009 19:04:39.046843  303624 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21683-294150/.minikube/bin
	I1009 19:04:39.047179  303624 mustload.go:65] Loading cluster: addons-999657
	I1009 19:04:39.047604  303624 config.go:182] Loaded profile config "addons-999657": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1009 19:04:39.047626  303624 addons.go:606] checking whether the cluster is paused
	I1009 19:04:39.047764  303624 config.go:182] Loaded profile config "addons-999657": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1009 19:04:39.047783  303624 host.go:66] Checking if "addons-999657" exists ...
	I1009 19:04:39.048275  303624 cli_runner.go:164] Run: docker container inspect addons-999657 --format={{.State.Status}}
	I1009 19:04:39.066465  303624 ssh_runner.go:195] Run: systemctl --version
	I1009 19:04:39.066529  303624 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-999657
	I1009 19:04:39.083808  303624 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/21683-294150/.minikube/machines/addons-999657/id_rsa Username:docker}
	I1009 19:04:39.187849  303624 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1009 19:04:39.187951  303624 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1009 19:04:39.236947  303624 cri.go:89] found id: "50e1747ecacea77a5f93e1d46aa99c4cb1fbce08f8f2f546154db1f9be02c796"
	I1009 19:04:39.236976  303624 cri.go:89] found id: "3f0053d1e02ad19ca3e128caaadebe9c4c0975ba7cb365a25e3c6d52870a17f0"
	I1009 19:04:39.236982  303624 cri.go:89] found id: "4e9a584f93742f2382e823963bed9b224f3ccba7c95660eb1dbca3c6b9908b3f"
	I1009 19:04:39.236986  303624 cri.go:89] found id: "f2087bf38944fc739afaac1113f222793a13154e659f060c4cdece3a7fa73071"
	I1009 19:04:39.236989  303624 cri.go:89] found id: "93ca74439d1e37925a704ede2fce384f9f9489c7b96ead6d63884128a4d9b0d1"
	I1009 19:04:39.236993  303624 cri.go:89] found id: "4011ef25cebccd0072dd10b711fedb8be54cb74589db33f0e4a5e667873eed44"
	I1009 19:04:39.236996  303624 cri.go:89] found id: "859a72eb5676e02ccdbfc8116afe2d9c4f2283fc97f6e130eb77ba45fe1f2ddf"
	I1009 19:04:39.236999  303624 cri.go:89] found id: "bb893c39a97db27f01be58b5eec66390173c64aa6dbf5fcc501e526bd34e4f74"
	I1009 19:04:39.237002  303624 cri.go:89] found id: "a9b5e7a178bf7423b4d23385f8409bd2da8f1ec9e312f7a1c786a7b9f1ec78fe"
	I1009 19:04:39.237007  303624 cri.go:89] found id: "cdcd01c9f8f4271bde354d676a0d7b97cf89b90bcce19fbab3de17f21aebb44c"
	I1009 19:04:39.237011  303624 cri.go:89] found id: "39a52fb8859c2040cedab3dbdc0662ae79f7d3abba463258d2c504bf8830448b"
	I1009 19:04:39.237014  303624 cri.go:89] found id: "7b7dc9732ce4b2127334e1e0c5b92a0ae3fbb0d316e98281fb8f8e8269c4b998"
	I1009 19:04:39.237017  303624 cri.go:89] found id: "ec4db71d717ddecd989934884e42ed0846d635333858d9d800181bfa0530c564"
	I1009 19:04:39.237020  303624 cri.go:89] found id: "f7e6d7b389c66f70de1f9dfa7a02589e922c296513ed0a3835867069f4fa9db8"
	I1009 19:04:39.237023  303624 cri.go:89] found id: "fbc396505d84e35ea37081e17222da9738c8ff9edd4bf2e014fdb1bf99f6de56"
	I1009 19:04:39.237028  303624 cri.go:89] found id: "c8fc026ca1019d8a0f4406f6cc4f8f68a03b36d38c792e26f09d7d78bf7ea9e3"
	I1009 19:04:39.237036  303624 cri.go:89] found id: "2823efa103e5ee38b792c909eaeee0c995e8a8302f5b0f522f6d786b3be0e7ba"
	I1009 19:04:39.237040  303624 cri.go:89] found id: "532259f4c5926820e3e18f689f80a1bc102631a6a0a05374223820ef91ec414f"
	I1009 19:04:39.237044  303624 cri.go:89] found id: "d85964586435683cdf29db9f8d8e0fd3637c91ff01ef302bb910a1397cf75b01"
	I1009 19:04:39.237047  303624 cri.go:89] found id: "7fcbf1be4bdef0161b5efca4fb661fd9a8fddc41f80f6974b49d4a8bb8d17634"
	I1009 19:04:39.237051  303624 cri.go:89] found id: "09a19318421aec51d9c6d371040aa4863198795c9d616ed4df00c26edc18b036"
	I1009 19:04:39.237057  303624 cri.go:89] found id: "aaa0ded06ea4b321c4f9a079d4cf69d526ba351445ed008be4734d67b7ea8524"
	I1009 19:04:39.237060  303624 cri.go:89] found id: "804d5a04697a7b5835636d98ba88b94561d9699443a8eadbbe90fd28d0b160cb"
	I1009 19:04:39.237063  303624 cri.go:89] found id: ""
	I1009 19:04:39.237198  303624 ssh_runner.go:195] Run: sudo runc list -f json
	I1009 19:04:39.270180  303624 out.go:203] 
	W1009 19:04:39.273077  303624 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-09T19:04:39Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-09T19:04:39Z" level=error msg="open /run/runc: no such file or directory"
	
	W1009 19:04:39.273101  303624 out.go:285] * 
	* 
	W1009 19:04:39.278190  303624 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_47e1a72799625313bd916979b0f8aa84efd54736_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_47e1a72799625313bd916979b0f8aa84efd54736_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1009 19:04:39.281456  303624 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable nvidia-device-plugin addon: args "out/minikube-linux-arm64 -p addons-999657 addons disable nvidia-device-plugin --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/NvidiaDevicePlugin (6.30s)
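The MK_ADDON_DISABLE_PAUSED exits above all come from the same place: before disabling an addon, minikube checks whether the cluster is paused by listing kube-system containers through crictl and then calling runc, and the runc call fails on this node because /run/runc does not exist. A minimal way to reproduce both checks by hand, assuming the addons-999657 profile is still running (the two commands are copied from the log above):

# CRI-level listing that succeeds and returns the container IDs shown above
out/minikube-linux-arm64 -p addons-999657 ssh "sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
# runc listing that fails with "open /run/runc: no such file or directory"
out/minikube-linux-arm64 -p addons-999657 ssh "sudo runc list -f json"

The second command is the one that turns an otherwise healthy addon disable into exit status 11.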

                                                
                                    
x
+
TestAddons/parallel/Yakd (5.31s)

                                                
                                                
=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Yakd
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:352: "yakd-dashboard-5ff678cb9-vn427" [2d32e61c-a566-4933-a2f0-fea73eb5dd64] Running
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 5.003748757s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-999657 addons disable yakd --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-999657 addons disable yakd --alsologtostderr -v=1: exit status 11 (300.623954ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1009 19:04:32.765662  303510 out.go:360] Setting OutFile to fd 1 ...
	I1009 19:04:32.767123  303510 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1009 19:04:32.767184  303510 out.go:374] Setting ErrFile to fd 2...
	I1009 19:04:32.767206  303510 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1009 19:04:32.767546  303510 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21683-294150/.minikube/bin
	I1009 19:04:32.767915  303510 mustload.go:65] Loading cluster: addons-999657
	I1009 19:04:32.768351  303510 config.go:182] Loaded profile config "addons-999657": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1009 19:04:32.768392  303510 addons.go:606] checking whether the cluster is paused
	I1009 19:04:32.768539  303510 config.go:182] Loaded profile config "addons-999657": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1009 19:04:32.768571  303510 host.go:66] Checking if "addons-999657" exists ...
	I1009 19:04:32.769169  303510 cli_runner.go:164] Run: docker container inspect addons-999657 --format={{.State.Status}}
	I1009 19:04:32.798029  303510 ssh_runner.go:195] Run: systemctl --version
	I1009 19:04:32.798090  303510 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-999657
	I1009 19:04:32.815819  303510 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/21683-294150/.minikube/machines/addons-999657/id_rsa Username:docker}
	I1009 19:04:32.919962  303510 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1009 19:04:32.920041  303510 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1009 19:04:32.956928  303510 cri.go:89] found id: "50e1747ecacea77a5f93e1d46aa99c4cb1fbce08f8f2f546154db1f9be02c796"
	I1009 19:04:32.956952  303510 cri.go:89] found id: "3f0053d1e02ad19ca3e128caaadebe9c4c0975ba7cb365a25e3c6d52870a17f0"
	I1009 19:04:32.956958  303510 cri.go:89] found id: "4e9a584f93742f2382e823963bed9b224f3ccba7c95660eb1dbca3c6b9908b3f"
	I1009 19:04:32.956962  303510 cri.go:89] found id: "f2087bf38944fc739afaac1113f222793a13154e659f060c4cdece3a7fa73071"
	I1009 19:04:32.956965  303510 cri.go:89] found id: "93ca74439d1e37925a704ede2fce384f9f9489c7b96ead6d63884128a4d9b0d1"
	I1009 19:04:32.956968  303510 cri.go:89] found id: "4011ef25cebccd0072dd10b711fedb8be54cb74589db33f0e4a5e667873eed44"
	I1009 19:04:32.956971  303510 cri.go:89] found id: "859a72eb5676e02ccdbfc8116afe2d9c4f2283fc97f6e130eb77ba45fe1f2ddf"
	I1009 19:04:32.956974  303510 cri.go:89] found id: "bb893c39a97db27f01be58b5eec66390173c64aa6dbf5fcc501e526bd34e4f74"
	I1009 19:04:32.956977  303510 cri.go:89] found id: "a9b5e7a178bf7423b4d23385f8409bd2da8f1ec9e312f7a1c786a7b9f1ec78fe"
	I1009 19:04:32.956983  303510 cri.go:89] found id: "cdcd01c9f8f4271bde354d676a0d7b97cf89b90bcce19fbab3de17f21aebb44c"
	I1009 19:04:32.956987  303510 cri.go:89] found id: "39a52fb8859c2040cedab3dbdc0662ae79f7d3abba463258d2c504bf8830448b"
	I1009 19:04:32.956990  303510 cri.go:89] found id: "7b7dc9732ce4b2127334e1e0c5b92a0ae3fbb0d316e98281fb8f8e8269c4b998"
	I1009 19:04:32.956993  303510 cri.go:89] found id: "ec4db71d717ddecd989934884e42ed0846d635333858d9d800181bfa0530c564"
	I1009 19:04:32.956997  303510 cri.go:89] found id: "f7e6d7b389c66f70de1f9dfa7a02589e922c296513ed0a3835867069f4fa9db8"
	I1009 19:04:32.957000  303510 cri.go:89] found id: "fbc396505d84e35ea37081e17222da9738c8ff9edd4bf2e014fdb1bf99f6de56"
	I1009 19:04:32.957005  303510 cri.go:89] found id: "c8fc026ca1019d8a0f4406f6cc4f8f68a03b36d38c792e26f09d7d78bf7ea9e3"
	I1009 19:04:32.957021  303510 cri.go:89] found id: "2823efa103e5ee38b792c909eaeee0c995e8a8302f5b0f522f6d786b3be0e7ba"
	I1009 19:04:32.957027  303510 cri.go:89] found id: "532259f4c5926820e3e18f689f80a1bc102631a6a0a05374223820ef91ec414f"
	I1009 19:04:32.957030  303510 cri.go:89] found id: "d85964586435683cdf29db9f8d8e0fd3637c91ff01ef302bb910a1397cf75b01"
	I1009 19:04:32.957039  303510 cri.go:89] found id: "7fcbf1be4bdef0161b5efca4fb661fd9a8fddc41f80f6974b49d4a8bb8d17634"
	I1009 19:04:32.957047  303510 cri.go:89] found id: "09a19318421aec51d9c6d371040aa4863198795c9d616ed4df00c26edc18b036"
	I1009 19:04:32.957051  303510 cri.go:89] found id: "aaa0ded06ea4b321c4f9a079d4cf69d526ba351445ed008be4734d67b7ea8524"
	I1009 19:04:32.957054  303510 cri.go:89] found id: "804d5a04697a7b5835636d98ba88b94561d9699443a8eadbbe90fd28d0b160cb"
	I1009 19:04:32.957057  303510 cri.go:89] found id: ""
	I1009 19:04:32.957138  303510 ssh_runner.go:195] Run: sudo runc list -f json
	I1009 19:04:32.972701  303510 out.go:203] 
	W1009 19:04:32.975562  303510 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-09T19:04:32Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-09T19:04:32Z" level=error msg="open /run/runc: no such file or directory"
	
	W1009 19:04:32.975610  303510 out.go:285] * 
	* 
	W1009 19:04:32.980635  303510 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_82e5d844def28f20a5cac88dc27578ab5d1e7e1a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_82e5d844def28f20a5cac88dc27578ab5d1e7e1a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1009 19:04:32.983435  303510 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable yakd addon: args "out/minikube-linux-arm64 -p addons-999657 addons disable yakd --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Yakd (5.31s)
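The Yakd failure follows the same pattern: the 2m0s readiness wait on the app.kubernetes.io/name=yakd-dashboard label succeeds in about 5 seconds, and only the subsequent disable step trips over the paused-state check described above. A rough manual equivalent of that readiness wait, assuming kubectl is pointed at the addons-999657 cluster (a hypothetical check, not something the harness runs):

# Wait up to 2 minutes for the yakd-dashboard pod to become Ready
kubectl -n yakd-dashboard wait pod -l app.kubernetes.io/name=yakd-dashboard --for=condition=Ready --timeout=2m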

                                                
                                    
x
+
TestForceSystemdFlag (515.98s)

                                                
                                                
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-flag-736218 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:91: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p force-systemd-flag-736218 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: exit status 80 (8m32.175010293s)
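TestForceSystemdFlag starts a fresh profile with --force-systemd and the crio runtime; the run exits with status 80 after roughly 8.5 minutes, while the provisioning excerpt below shows node creation and the CRI-O switch to the systemd cgroup driver completing. A quick way to confirm that the forced setting actually landed on the node, assuming the force-systemd-flag-736218 container is still up (a manual check mirroring the sed edits visible in the log below, not part of the test itself):

# The provisioning step rewrites this drop-in; expect cgroup_manager = "systemd" and conmon_cgroup = "pod"
out/minikube-linux-arm64 -p force-systemd-flag-736218 ssh "grep -E 'cgroup_manager|conmon_cgroup' /etc/crio/crio.conf.d/02-crio.conf"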

                                                
                                                
-- stdout --
	* [force-systemd-flag-736218] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21683
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21683-294150/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21683-294150/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	* Using Docker driver with root privileges
	* Starting "force-systemd-flag-736218" primary control-plane node in "force-systemd-flag-736218" cluster
	* Pulling base image v0.0.48-1759745255-21703 ...
	* Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1009 20:03:44.544032  459237 out.go:360] Setting OutFile to fd 1 ...
	I1009 20:03:44.544274  459237 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1009 20:03:44.544303  459237 out.go:374] Setting ErrFile to fd 2...
	I1009 20:03:44.544326  459237 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1009 20:03:44.544617  459237 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21683-294150/.minikube/bin
	I1009 20:03:44.545095  459237 out.go:368] Setting JSON to false
	I1009 20:03:44.546097  459237 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":9964,"bootTime":1760030261,"procs":178,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1009 20:03:44.546188  459237 start.go:143] virtualization:  
	I1009 20:03:44.552552  459237 out.go:179] * [force-systemd-flag-736218] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1009 20:03:44.556164  459237 out.go:179]   - MINIKUBE_LOCATION=21683
	I1009 20:03:44.556242  459237 notify.go:221] Checking for updates...
	I1009 20:03:44.563328  459237 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1009 20:03:44.566674  459237 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21683-294150/kubeconfig
	I1009 20:03:44.569775  459237 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21683-294150/.minikube
	I1009 20:03:44.572947  459237 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1009 20:03:44.575991  459237 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1009 20:03:44.579638  459237 config.go:182] Loaded profile config "kubernetes-upgrade-164946": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1009 20:03:44.579745  459237 driver.go:422] Setting default libvirt URI to qemu:///system
	I1009 20:03:44.618500  459237 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1009 20:03:44.618633  459237 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1009 20:03:44.706730  459237 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-09 20:03:44.69501288 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aa
rch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214827008 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path
:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1009 20:03:44.706843  459237 docker.go:319] overlay module found
	I1009 20:03:44.710161  459237 out.go:179] * Using the docker driver based on user configuration
	I1009 20:03:44.713082  459237 start.go:309] selected driver: docker
	I1009 20:03:44.713102  459237 start.go:930] validating driver "docker" against <nil>
	I1009 20:03:44.713254  459237 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1009 20:03:44.713974  459237 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1009 20:03:44.796514  459237 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-09 20:03:44.780313105 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214827008 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1009 20:03:44.796667  459237 start_flags.go:328] no existing cluster config was found, will generate one from the flags 
	I1009 20:03:44.796909  459237 start_flags.go:975] Wait components to verify : map[apiserver:true system_pods:true]
	I1009 20:03:44.799892  459237 out.go:179] * Using Docker driver with root privileges
	I1009 20:03:44.802712  459237 cni.go:84] Creating CNI manager for ""
	I1009 20:03:44.802794  459237 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1009 20:03:44.802811  459237 start_flags.go:337] Found "CNI" CNI - setting NetworkPlugin=cni
	I1009 20:03:44.802903  459237 start.go:353] cluster config:
	{Name:force-systemd-flag-736218 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:force-systemd-flag-736218 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluste
r.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1009 20:03:44.806015  459237 out.go:179] * Starting "force-systemd-flag-736218" primary control-plane node in "force-systemd-flag-736218" cluster
	I1009 20:03:44.809292  459237 cache.go:123] Beginning downloading kic base image for docker with crio
	I1009 20:03:44.812208  459237 out.go:179] * Pulling base image v0.0.48-1759745255-21703 ...
	I1009 20:03:44.815106  459237 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1009 20:03:44.815172  459237 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21683-294150/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1009 20:03:44.815197  459237 cache.go:58] Caching tarball of preloaded images
	I1009 20:03:44.815300  459237 preload.go:233] Found /home/jenkins/minikube-integration/21683-294150/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1009 20:03:44.815316  459237 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1009 20:03:44.815419  459237 profile.go:143] Saving config to /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/force-systemd-flag-736218/config.json ...
	I1009 20:03:44.815444  459237 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/force-systemd-flag-736218/config.json: {Name:mke7eb96a32938a95db524c8a3e8392af02ab225 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 20:03:44.815602  459237 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon
	I1009 20:03:44.836896  459237 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon, skipping pull
	I1009 20:03:44.836922  459237 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 exists in daemon, skipping load
	I1009 20:03:44.836935  459237 cache.go:232] Successfully downloaded all kic artifacts
	I1009 20:03:44.836958  459237 start.go:361] acquireMachinesLock for force-systemd-flag-736218: {Name:mk7db78b279c94d7ad7ce7665f9e196b8b747e91 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1009 20:03:44.837084  459237 start.go:365] duration metric: took 105.765µs to acquireMachinesLock for "force-systemd-flag-736218"
	I1009 20:03:44.837160  459237 start.go:94] Provisioning new machine with config: &{Name:force-systemd-flag-736218 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:force-systemd-flag-736218 Namespace:default APIServer
HAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:
SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1009 20:03:44.837239  459237 start.go:126] createHost starting for "" (driver="docker")
	I1009 20:03:44.840534  459237 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1009 20:03:44.840772  459237 start.go:160] libmachine.API.Create for "force-systemd-flag-736218" (driver="docker")
	I1009 20:03:44.840812  459237 client.go:168] LocalClient.Create starting
	I1009 20:03:44.840889  459237 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21683-294150/.minikube/certs/ca.pem
	I1009 20:03:44.840928  459237 main.go:141] libmachine: Decoding PEM data...
	I1009 20:03:44.840948  459237 main.go:141] libmachine: Parsing certificate...
	I1009 20:03:44.841005  459237 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21683-294150/.minikube/certs/cert.pem
	I1009 20:03:44.841028  459237 main.go:141] libmachine: Decoding PEM data...
	I1009 20:03:44.841043  459237 main.go:141] libmachine: Parsing certificate...
	I1009 20:03:44.841471  459237 cli_runner.go:164] Run: docker network inspect force-systemd-flag-736218 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1009 20:03:44.862927  459237 cli_runner.go:211] docker network inspect force-systemd-flag-736218 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1009 20:03:44.863016  459237 network_create.go:284] running [docker network inspect force-systemd-flag-736218] to gather additional debugging logs...
	I1009 20:03:44.863038  459237 cli_runner.go:164] Run: docker network inspect force-systemd-flag-736218
	W1009 20:03:44.883152  459237 cli_runner.go:211] docker network inspect force-systemd-flag-736218 returned with exit code 1
	I1009 20:03:44.883179  459237 network_create.go:287] error running [docker network inspect force-systemd-flag-736218]: docker network inspect force-systemd-flag-736218: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network force-systemd-flag-736218 not found
	I1009 20:03:44.883193  459237 network_create.go:289] output of [docker network inspect force-systemd-flag-736218]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network force-systemd-flag-736218 not found
	
	** /stderr **
	I1009 20:03:44.883302  459237 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1009 20:03:44.901820  459237 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-3847a6577684 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:6e:b5:e6:7d:c7:ad} reservation:<nil>}
	I1009 20:03:44.902180  459237 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-5742e12e0dad IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:9e:82:91:fd:a6:fb} reservation:<nil>}
	I1009 20:03:44.902426  459237 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-11b099636187 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:0e:bb:e5:1b:6d:a2} reservation:<nil>}
	I1009 20:03:44.902663  459237 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-81d7de03c6b7 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:0e:2e:fb:2c:1a:98} reservation:<nil>}
	I1009 20:03:44.903085  459237 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40019df360}
	I1009 20:03:44.903102  459237 network_create.go:124] attempt to create docker network force-systemd-flag-736218 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1009 20:03:44.903163  459237 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=force-systemd-flag-736218 force-systemd-flag-736218
	I1009 20:03:44.971370  459237 network_create.go:108] docker network force-systemd-flag-736218 192.168.85.0/24 created
	I1009 20:03:44.971401  459237 kic.go:121] calculated static IP "192.168.85.2" for the "force-systemd-flag-736218" container
	I1009 20:03:44.971489  459237 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1009 20:03:44.992963  459237 cli_runner.go:164] Run: docker volume create force-systemd-flag-736218 --label name.minikube.sigs.k8s.io=force-systemd-flag-736218 --label created_by.minikube.sigs.k8s.io=true
	I1009 20:03:45.011962  459237 oci.go:103] Successfully created a docker volume force-systemd-flag-736218
	I1009 20:03:45.012063  459237 cli_runner.go:164] Run: docker run --rm --name force-systemd-flag-736218-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=force-systemd-flag-736218 --entrypoint /usr/bin/test -v force-systemd-flag-736218:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 -d /var/lib
	I1009 20:03:45.811176  459237 oci.go:107] Successfully prepared a docker volume force-systemd-flag-736218
	I1009 20:03:45.811218  459237 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1009 20:03:45.811237  459237 kic.go:194] Starting extracting preloaded images to volume ...
	I1009 20:03:45.811318  459237 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21683-294150/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v force-systemd-flag-736218:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 -I lz4 -xf /preloaded.tar -C /extractDir
	I1009 20:03:50.893516  459237 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21683-294150/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v force-systemd-flag-736218:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 -I lz4 -xf /preloaded.tar -C /extractDir: (5.082161789s)
	I1009 20:03:50.893561  459237 kic.go:203] duration metric: took 5.082306808s to extract preloaded images to volume ...
	W1009 20:03:50.893696  459237 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1009 20:03:50.893803  459237 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1009 20:03:50.987431  459237 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname force-systemd-flag-736218 --name force-systemd-flag-736218 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=force-systemd-flag-736218 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=force-systemd-flag-736218 --network force-systemd-flag-736218 --ip 192.168.85.2 --volume force-systemd-flag-736218:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92
	I1009 20:03:51.415536  459237 cli_runner.go:164] Run: docker container inspect force-systemd-flag-736218 --format={{.State.Running}}
	I1009 20:03:51.442249  459237 cli_runner.go:164] Run: docker container inspect force-systemd-flag-736218 --format={{.State.Status}}
	I1009 20:03:51.473293  459237 cli_runner.go:164] Run: docker exec force-systemd-flag-736218 stat /var/lib/dpkg/alternatives/iptables
	I1009 20:03:51.535703  459237 oci.go:144] the created container "force-systemd-flag-736218" has a running status.
	I1009 20:03:51.535740  459237 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21683-294150/.minikube/machines/force-systemd-flag-736218/id_rsa...
	I1009 20:03:51.630008  459237 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-294150/.minikube/machines/force-systemd-flag-736218/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I1009 20:03:51.631734  459237 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21683-294150/.minikube/machines/force-systemd-flag-736218/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1009 20:03:51.657871  459237 cli_runner.go:164] Run: docker container inspect force-systemd-flag-736218 --format={{.State.Status}}
	I1009 20:03:51.687066  459237 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1009 20:03:51.687086  459237 kic_runner.go:114] Args: [docker exec --privileged force-systemd-flag-736218 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1009 20:03:51.758827  459237 cli_runner.go:164] Run: docker container inspect force-systemd-flag-736218 --format={{.State.Status}}
	I1009 20:03:51.793894  459237 machine.go:93] provisionDockerMachine start ...
	I1009 20:03:51.793999  459237 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-736218
	I1009 20:03:51.829561  459237 main.go:141] libmachine: Using SSH client type: native
	I1009 20:03:51.829932  459237 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33396 <nil> <nil>}
	I1009 20:03:51.829943  459237 main.go:141] libmachine: About to run SSH command:
	hostname
	I1009 20:03:51.830634  459237 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:53826->127.0.0.1:33396: read: connection reset by peer
	I1009 20:03:55.019835  459237 main.go:141] libmachine: SSH cmd err, output: <nil>: force-systemd-flag-736218
	
	I1009 20:03:55.019918  459237 ubuntu.go:182] provisioning hostname "force-systemd-flag-736218"
	I1009 20:03:55.020016  459237 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-736218
	I1009 20:03:55.046855  459237 main.go:141] libmachine: Using SSH client type: native
	I1009 20:03:55.047181  459237 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33396 <nil> <nil>}
	I1009 20:03:55.047200  459237 main.go:141] libmachine: About to run SSH command:
	sudo hostname force-systemd-flag-736218 && echo "force-systemd-flag-736218" | sudo tee /etc/hostname
	I1009 20:03:55.230175  459237 main.go:141] libmachine: SSH cmd err, output: <nil>: force-systemd-flag-736218
	
	I1009 20:03:55.230322  459237 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-736218
	I1009 20:03:55.255528  459237 main.go:141] libmachine: Using SSH client type: native
	I1009 20:03:55.255857  459237 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33396 <nil> <nil>}
	I1009 20:03:55.255877  459237 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sforce-systemd-flag-736218' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 force-systemd-flag-736218/g' /etc/hosts;
				else 
					echo '127.0.1.1 force-systemd-flag-736218' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1009 20:03:55.441787  459237 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1009 20:03:55.441813  459237 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21683-294150/.minikube CaCertPath:/home/jenkins/minikube-integration/21683-294150/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21683-294150/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21683-294150/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21683-294150/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21683-294150/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21683-294150/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21683-294150/.minikube}
	I1009 20:03:55.441842  459237 ubuntu.go:190] setting up certificates
	I1009 20:03:55.441852  459237 provision.go:84] configureAuth start
	I1009 20:03:55.441917  459237 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" force-systemd-flag-736218
	I1009 20:03:55.471341  459237 provision.go:143] copyHostCerts
	I1009 20:03:55.471388  459237 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-294150/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21683-294150/.minikube/ca.pem
	I1009 20:03:55.471420  459237 exec_runner.go:144] found /home/jenkins/minikube-integration/21683-294150/.minikube/ca.pem, removing ...
	I1009 20:03:55.471430  459237 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21683-294150/.minikube/ca.pem
	I1009 20:03:55.471507  459237 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-294150/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21683-294150/.minikube/ca.pem (1078 bytes)
	I1009 20:03:55.471655  459237 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-294150/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21683-294150/.minikube/cert.pem
	I1009 20:03:55.471686  459237 exec_runner.go:144] found /home/jenkins/minikube-integration/21683-294150/.minikube/cert.pem, removing ...
	I1009 20:03:55.471692  459237 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21683-294150/.minikube/cert.pem
	I1009 20:03:55.471725  459237 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-294150/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21683-294150/.minikube/cert.pem (1123 bytes)
	I1009 20:03:55.471813  459237 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-294150/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21683-294150/.minikube/key.pem
	I1009 20:03:55.471837  459237 exec_runner.go:144] found /home/jenkins/minikube-integration/21683-294150/.minikube/key.pem, removing ...
	I1009 20:03:55.471843  459237 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21683-294150/.minikube/key.pem
	I1009 20:03:55.471876  459237 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-294150/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21683-294150/.minikube/key.pem (1679 bytes)
	I1009 20:03:55.471937  459237 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21683-294150/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21683-294150/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21683-294150/.minikube/certs/ca-key.pem org=jenkins.force-systemd-flag-736218 san=[127.0.0.1 192.168.85.2 force-systemd-flag-736218 localhost minikube]
	I1009 20:03:56.030087  459237 provision.go:177] copyRemoteCerts
	I1009 20:03:56.030179  459237 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1009 20:03:56.030259  459237 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-736218
	I1009 20:03:56.054998  459237 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33396 SSHKeyPath:/home/jenkins/minikube-integration/21683-294150/.minikube/machines/force-systemd-flag-736218/id_rsa Username:docker}
	I1009 20:03:56.169804  459237 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-294150/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1009 20:03:56.169883  459237 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-294150/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1009 20:03:56.190443  459237 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-294150/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1009 20:03:56.190525  459237 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-294150/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I1009 20:03:56.210912  459237 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-294150/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1009 20:03:56.210979  459237 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-294150/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1009 20:03:56.231668  459237 provision.go:87] duration metric: took 789.789278ms to configureAuth
	I1009 20:03:56.231697  459237 ubuntu.go:206] setting minikube options for container-runtime
	I1009 20:03:56.231951  459237 config.go:182] Loaded profile config "force-systemd-flag-736218": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1009 20:03:56.232120  459237 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-736218
	I1009 20:03:56.257772  459237 main.go:141] libmachine: Using SSH client type: native
	I1009 20:03:56.258096  459237 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33396 <nil> <nil>}
	I1009 20:03:56.258122  459237 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1009 20:03:56.675415  459237 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1009 20:03:56.675451  459237 machine.go:96] duration metric: took 4.881536853s to provisionDockerMachine
	I1009 20:03:56.675469  459237 client.go:171] duration metric: took 11.834641136s to LocalClient.Create
	I1009 20:03:56.675484  459237 start.go:168] duration metric: took 11.834714474s to libmachine.API.Create "force-systemd-flag-736218"
	I1009 20:03:56.675491  459237 start.go:294] postStartSetup for "force-systemd-flag-736218" (driver="docker")
	I1009 20:03:56.675502  459237 start.go:323] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1009 20:03:56.675578  459237 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1009 20:03:56.675622  459237 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-736218
	I1009 20:03:56.703040  459237 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33396 SSHKeyPath:/home/jenkins/minikube-integration/21683-294150/.minikube/machines/force-systemd-flag-736218/id_rsa Username:docker}
	I1009 20:03:56.825956  459237 ssh_runner.go:195] Run: cat /etc/os-release
	I1009 20:03:56.831065  459237 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1009 20:03:56.831141  459237 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1009 20:03:56.831156  459237 filesync.go:126] Scanning /home/jenkins/minikube-integration/21683-294150/.minikube/addons for local assets ...
	I1009 20:03:56.831207  459237 filesync.go:126] Scanning /home/jenkins/minikube-integration/21683-294150/.minikube/files for local assets ...
	I1009 20:03:56.831283  459237 filesync.go:149] local asset: /home/jenkins/minikube-integration/21683-294150/.minikube/files/etc/ssl/certs/2960022.pem -> 2960022.pem in /etc/ssl/certs
	I1009 20:03:56.831290  459237 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-294150/.minikube/files/etc/ssl/certs/2960022.pem -> /etc/ssl/certs/2960022.pem
	I1009 20:03:56.831386  459237 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1009 20:03:56.844920  459237 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-294150/.minikube/files/etc/ssl/certs/2960022.pem --> /etc/ssl/certs/2960022.pem (1708 bytes)
	I1009 20:03:56.878507  459237 start.go:297] duration metric: took 203.000047ms for postStartSetup
	I1009 20:03:56.878985  459237 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" force-systemd-flag-736218
	I1009 20:03:56.908895  459237 profile.go:143] Saving config to /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/force-systemd-flag-736218/config.json ...
	I1009 20:03:56.909212  459237 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1009 20:03:56.909262  459237 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-736218
	I1009 20:03:56.937721  459237 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33396 SSHKeyPath:/home/jenkins/minikube-integration/21683-294150/.minikube/machines/force-systemd-flag-736218/id_rsa Username:docker}
	I1009 20:03:57.058608  459237 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1009 20:03:57.069496  459237 start.go:129] duration metric: took 12.232239439s to createHost
	I1009 20:03:57.069561  459237 start.go:84] releasing machines lock for "force-systemd-flag-736218", held for 12.232461972s
	I1009 20:03:57.069667  459237 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" force-systemd-flag-736218
	I1009 20:03:57.088900  459237 ssh_runner.go:195] Run: cat /version.json
	I1009 20:03:57.088963  459237 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-736218
	I1009 20:03:57.089174  459237 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1009 20:03:57.089237  459237 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-736218
	I1009 20:03:57.120628  459237 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33396 SSHKeyPath:/home/jenkins/minikube-integration/21683-294150/.minikube/machines/force-systemd-flag-736218/id_rsa Username:docker}
	I1009 20:03:57.133978  459237 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33396 SSHKeyPath:/home/jenkins/minikube-integration/21683-294150/.minikube/machines/force-systemd-flag-736218/id_rsa Username:docker}
	I1009 20:03:57.244903  459237 ssh_runner.go:195] Run: systemctl --version
	I1009 20:03:57.352318  459237 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1009 20:03:57.408295  459237 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1009 20:03:57.413248  459237 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1009 20:03:57.413375  459237 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1009 20:03:57.444306  459237 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1009 20:03:57.444332  459237 start.go:496] detecting cgroup driver to use...
	I1009 20:03:57.444345  459237 start.go:500] using "systemd" cgroup driver as enforced via flags
	I1009 20:03:57.444425  459237 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1009 20:03:57.467836  459237 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1009 20:03:57.485150  459237 docker.go:218] disabling cri-docker service (if available) ...
	I1009 20:03:57.485258  459237 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1009 20:03:57.508020  459237 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1009 20:03:57.537782  459237 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1009 20:03:57.745742  459237 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1009 20:03:57.975557  459237 docker.go:234] disabling docker service ...
	I1009 20:03:57.975636  459237 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1009 20:03:58.015417  459237 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1009 20:03:58.034188  459237 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1009 20:03:58.257228  459237 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1009 20:03:58.480832  459237 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1009 20:03:58.506704  459237 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1009 20:03:58.525181  459237 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1009 20:03:58.525249  459237 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 20:03:58.536274  459237 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1009 20:03:58.536340  459237 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 20:03:58.548468  459237 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 20:03:58.568620  459237 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 20:03:58.582265  459237 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1009 20:03:58.596522  459237 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 20:03:58.608646  459237 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 20:03:58.625514  459237 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 20:03:58.644057  459237 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1009 20:03:58.656478  459237 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1009 20:03:58.666174  459237 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 20:03:58.810765  459237 ssh_runner.go:195] Run: sudo systemctl restart crio
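The sed edits above rewrite /etc/crio/crio.conf.d/02-crio.conf so that cri-o uses the systemd cgroup manager, a "pod" conmon cgroup, the registry.k8s.io/pause:3.10.1 pause image and an unprivileged-port default sysctl, and then restart crio. A minimal sketch (assuming the same drop-in path) of how one could verify the rewritten values after the restart:

	sudo grep -E 'cgroup_manager|conmon_cgroup|pause_image|ip_unprivileged_port_start' /etc/crio/crio.conf.d/02-crio.conf
	sudo systemctl --no-pager status crio    # confirm crio came back up after the restart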
	I1009 20:03:58.981554  459237 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1009 20:03:58.981628  459237 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1009 20:03:58.986441  459237 start.go:564] Will wait 60s for crictl version
	I1009 20:03:58.986506  459237 ssh_runner.go:195] Run: which crictl
	I1009 20:03:58.991141  459237 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1009 20:03:59.035032  459237 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1009 20:03:59.035123  459237 ssh_runner.go:195] Run: crio --version
	I1009 20:03:59.074742  459237 ssh_runner.go:195] Run: crio --version
	I1009 20:03:59.122812  459237 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1009 20:03:59.125533  459237 cli_runner.go:164] Run: docker network inspect force-systemd-flag-736218 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1009 20:03:59.142710  459237 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1009 20:03:59.146799  459237 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
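The one-liner above is how minikube keeps exactly one host.minikube.internal entry in /etc/hosts: strip any old entry, append the current one, and copy the result back with sudo. A small, hypothetical stand-alone version of the same pattern (HOST and IP here are illustrative variables, not part of the log):

	HOST=host.minikube.internal; IP=192.168.85.1
	{ grep -v $'\t'"$HOST"'$' /etc/hosts; printf '%s\t%s\n' "$IP" "$HOST"; } > /tmp/hosts.$$
	sudo cp /tmp/hosts.$$ /etc/hosts && rm /tmp/hosts.$$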
	I1009 20:03:59.156920  459237 kubeadm.go:883] updating cluster {Name:force-systemd-flag-736218 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:force-systemd-flag-736218 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1009 20:03:59.157030  459237 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1009 20:03:59.157089  459237 ssh_runner.go:195] Run: sudo crictl images --output json
	I1009 20:03:59.210846  459237 crio.go:514] all images are preloaded for cri-o runtime.
	I1009 20:03:59.210870  459237 crio.go:433] Images already preloaded, skipping extraction
	I1009 20:03:59.210928  459237 ssh_runner.go:195] Run: sudo crictl images --output json
	I1009 20:03:59.241675  459237 crio.go:514] all images are preloaded for cri-o runtime.
	I1009 20:03:59.241697  459237 cache_images.go:85] Images are preloaded, skipping loading
	I1009 20:03:59.241704  459237 kubeadm.go:934] updating node { 192.168.85.2 8443 v1.34.1 crio true true} ...
	I1009 20:03:59.241796  459237 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=force-systemd-flag-736218 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:force-systemd-flag-736218 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1009 20:03:59.241892  459237 ssh_runner.go:195] Run: crio config
	I1009 20:03:59.318288  459237 cni.go:84] Creating CNI manager for ""
	I1009 20:03:59.318365  459237 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1009 20:03:59.318405  459237 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1009 20:03:59.318463  459237 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:force-systemd-flag-736218 NodeName:force-systemd-flag-736218 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1009 20:03:59.318637  459237 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "force-systemd-flag-736218"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1009 20:03:59.318744  459237 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1009 20:03:59.327303  459237 binaries.go:44] Found k8s binaries, skipping transfer
	I1009 20:03:59.327420  459237 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1009 20:03:59.335827  459237 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (375 bytes)
	I1009 20:03:59.355401  459237 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1009 20:03:59.379612  459237 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2221 bytes)
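At this point the generated kubeadm config (the YAML shown above) has been written to /var/tmp/minikube/kubeadm.yaml.new on the node, alongside the kubelet unit drop-in. A minimal sketch, assuming a kubeadm release new enough to ship the config-validation subcommand (recent versions do), of how one could sanity-check that file before init is attempted:

	sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new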
	I1009 20:03:59.394106  459237 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1009 20:03:59.398541  459237 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1009 20:03:59.413721  459237 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 20:03:59.631310  459237 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1009 20:03:59.658179  459237 certs.go:69] Setting up /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/force-systemd-flag-736218 for IP: 192.168.85.2
	I1009 20:03:59.658199  459237 certs.go:195] generating shared ca certs ...
	I1009 20:03:59.658216  459237 certs.go:227] acquiring lock for ca certs: {Name:mk21fccf7de12baa51e226eec44546b42b579a70 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 20:03:59.658358  459237 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21683-294150/.minikube/ca.key
	I1009 20:03:59.658395  459237 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21683-294150/.minikube/proxy-client-ca.key
	I1009 20:03:59.658402  459237 certs.go:257] generating profile certs ...
	I1009 20:03:59.658457  459237 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/force-systemd-flag-736218/client.key
	I1009 20:03:59.658478  459237 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/force-systemd-flag-736218/client.crt with IP's: []
	I1009 20:03:59.991621  459237 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/force-systemd-flag-736218/client.crt ...
	I1009 20:03:59.991656  459237 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/force-systemd-flag-736218/client.crt: {Name:mk7f4f897324e226c6f2c0e55eca3844ceefb198 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 20:03:59.991913  459237 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/force-systemd-flag-736218/client.key ...
	I1009 20:03:59.991933  459237 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/force-systemd-flag-736218/client.key: {Name:mkceda78b976206039ad9b785c3f861a64c18ed7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 20:03:59.992074  459237 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/force-systemd-flag-736218/apiserver.key.acbb1824
	I1009 20:03:59.992107  459237 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/force-systemd-flag-736218/apiserver.crt.acbb1824 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I1009 20:04:00.476093  459237 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/force-systemd-flag-736218/apiserver.crt.acbb1824 ...
	I1009 20:04:00.476130  459237 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/force-systemd-flag-736218/apiserver.crt.acbb1824: {Name:mkc2809e2fc290dbd59f7da7852ecc0e0f281805 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 20:04:00.476372  459237 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/force-systemd-flag-736218/apiserver.key.acbb1824 ...
	I1009 20:04:00.476383  459237 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/force-systemd-flag-736218/apiserver.key.acbb1824: {Name:mkd049d40cac44daa45672612025e8fedfed53a5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 20:04:00.476461  459237 certs.go:382] copying /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/force-systemd-flag-736218/apiserver.crt.acbb1824 -> /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/force-systemd-flag-736218/apiserver.crt
	I1009 20:04:00.476559  459237 certs.go:386] copying /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/force-systemd-flag-736218/apiserver.key.acbb1824 -> /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/force-systemd-flag-736218/apiserver.key
	I1009 20:04:00.476649  459237 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/force-systemd-flag-736218/proxy-client.key
	I1009 20:04:00.476668  459237 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/force-systemd-flag-736218/proxy-client.crt with IP's: []
	I1009 20:04:00.908182  459237 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/force-systemd-flag-736218/proxy-client.crt ...
	I1009 20:04:00.908218  459237 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/force-systemd-flag-736218/proxy-client.crt: {Name:mk1ec2405bb18e95fb4ea6405609b1d8051b6e2a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 20:04:00.908392  459237 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/force-systemd-flag-736218/proxy-client.key ...
	I1009 20:04:00.908409  459237 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/force-systemd-flag-736218/proxy-client.key: {Name:mkfad09577bd6bccc8ad5a0642d245b00332fe63 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 20:04:00.908490  459237 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-294150/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1009 20:04:00.908516  459237 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-294150/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1009 20:04:00.908537  459237 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-294150/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1009 20:04:00.908557  459237 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-294150/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1009 20:04:00.908570  459237 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/force-systemd-flag-736218/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1009 20:04:00.908587  459237 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/force-systemd-flag-736218/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1009 20:04:00.908599  459237 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/force-systemd-flag-736218/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1009 20:04:00.908614  459237 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/force-systemd-flag-736218/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1009 20:04:00.908667  459237 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-294150/.minikube/certs/296002.pem (1338 bytes)
	W1009 20:04:00.908705  459237 certs.go:480] ignoring /home/jenkins/minikube-integration/21683-294150/.minikube/certs/296002_empty.pem, impossibly tiny 0 bytes
	I1009 20:04:00.908717  459237 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-294150/.minikube/certs/ca-key.pem (1679 bytes)
	I1009 20:04:00.908746  459237 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-294150/.minikube/certs/ca.pem (1078 bytes)
	I1009 20:04:00.908773  459237 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-294150/.minikube/certs/cert.pem (1123 bytes)
	I1009 20:04:00.908813  459237 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-294150/.minikube/certs/key.pem (1679 bytes)
	I1009 20:04:00.908861  459237 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-294150/.minikube/files/etc/ssl/certs/2960022.pem (1708 bytes)
	I1009 20:04:00.908892  459237 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-294150/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1009 20:04:00.908912  459237 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-294150/.minikube/certs/296002.pem -> /usr/share/ca-certificates/296002.pem
	I1009 20:04:00.908929  459237 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-294150/.minikube/files/etc/ssl/certs/2960022.pem -> /usr/share/ca-certificates/2960022.pem
	I1009 20:04:00.909578  459237 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-294150/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1009 20:04:00.934735  459237 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-294150/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1009 20:04:00.955447  459237 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-294150/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1009 20:04:00.996472  459237 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-294150/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1009 20:04:01.020426  459237 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/force-systemd-flag-736218/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I1009 20:04:01.041472  459237 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/force-systemd-flag-736218/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1009 20:04:01.064075  459237 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/force-systemd-flag-736218/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1009 20:04:01.084782  459237 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/force-systemd-flag-736218/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1009 20:04:01.104271  459237 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-294150/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1009 20:04:01.132591  459237 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-294150/.minikube/certs/296002.pem --> /usr/share/ca-certificates/296002.pem (1338 bytes)
	I1009 20:04:01.159343  459237 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-294150/.minikube/files/etc/ssl/certs/2960022.pem --> /usr/share/ca-certificates/2960022.pem (1708 bytes)
	I1009 20:04:01.187050  459237 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1009 20:04:01.202132  459237 ssh_runner.go:195] Run: openssl version
	I1009 20:04:01.210295  459237 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/296002.pem && ln -fs /usr/share/ca-certificates/296002.pem /etc/ssl/certs/296002.pem"
	I1009 20:04:01.220213  459237 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/296002.pem
	I1009 20:04:01.225244  459237 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  9 19:08 /usr/share/ca-certificates/296002.pem
	I1009 20:04:01.225368  459237 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/296002.pem
	I1009 20:04:01.268143  459237 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/296002.pem /etc/ssl/certs/51391683.0"
	I1009 20:04:01.277664  459237 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2960022.pem && ln -fs /usr/share/ca-certificates/2960022.pem /etc/ssl/certs/2960022.pem"
	I1009 20:04:01.286867  459237 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2960022.pem
	I1009 20:04:01.293042  459237 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  9 19:08 /usr/share/ca-certificates/2960022.pem
	I1009 20:04:01.293226  459237 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2960022.pem
	I1009 20:04:01.348753  459237 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2960022.pem /etc/ssl/certs/3ec20f2e.0"
	I1009 20:04:01.363699  459237 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1009 20:04:01.381024  459237 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1009 20:04:01.385762  459237 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  9 19:01 /usr/share/ca-certificates/minikubeCA.pem
	I1009 20:04:01.385832  459237 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1009 20:04:01.434328  459237 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
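The openssl/ln sequence above installs each CA under /etc/ssl/certs with a symlink named after its subject hash (e.g. b5213941.0 for minikubeCA.pem), which is how OpenSSL looks up trusted certificates. A minimal sketch of the same idea for a single certificate (the path is illustrative):

	CERT=/usr/share/ca-certificates/minikubeCA.pem
	HASH=$(openssl x509 -hash -noout -in "$CERT")
	sudo ln -fs "$CERT" "/etc/ssl/certs/${HASH}.0"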
	I1009 20:04:01.447150  459237 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1009 20:04:01.451288  459237 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1009 20:04:01.451354  459237 kubeadm.go:400] StartCluster: {Name:force-systemd-flag-736218 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:force-systemd-flag-736218 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1009 20:04:01.451434  459237 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1009 20:04:01.451495  459237 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1009 20:04:01.506804  459237 cri.go:89] found id: ""
	I1009 20:04:01.506882  459237 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1009 20:04:01.521041  459237 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1009 20:04:01.529722  459237 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1009 20:04:01.529807  459237 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1009 20:04:01.540527  459237 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1009 20:04:01.540544  459237 kubeadm.go:157] found existing configuration files:
	
	I1009 20:04:01.540599  459237 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1009 20:04:01.550546  459237 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1009 20:04:01.550617  459237 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1009 20:04:01.558676  459237 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1009 20:04:01.572825  459237 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1009 20:04:01.572898  459237 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1009 20:04:01.589915  459237 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1009 20:04:01.605311  459237 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1009 20:04:01.605380  459237 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1009 20:04:01.615041  459237 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1009 20:04:01.625322  459237 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1009 20:04:01.625400  459237 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1009 20:04:01.636286  459237 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1009 20:04:01.715177  459237 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1009 20:04:01.715979  459237 kubeadm.go:318] [preflight] Running pre-flight checks
	I1009 20:04:01.747618  459237 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1009 20:04:01.747695  459237 kubeadm.go:318] KERNEL_VERSION: 5.15.0-1084-aws
	I1009 20:04:01.747737  459237 kubeadm.go:318] OS: Linux
	I1009 20:04:01.747799  459237 kubeadm.go:318] CGROUPS_CPU: enabled
	I1009 20:04:01.747856  459237 kubeadm.go:318] CGROUPS_CPUACCT: enabled
	I1009 20:04:01.747909  459237 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1009 20:04:01.747965  459237 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1009 20:04:01.748018  459237 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1009 20:04:01.748073  459237 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1009 20:04:01.748123  459237 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1009 20:04:01.748177  459237 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1009 20:04:01.748232  459237 kubeadm.go:318] CGROUPS_BLKIO: enabled
	I1009 20:04:01.866294  459237 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1009 20:04:01.866414  459237 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1009 20:04:01.866515  459237 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1009 20:04:01.878931  459237 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1009 20:04:01.885413  459237 out.go:252]   - Generating certificates and keys ...
	I1009 20:04:01.885515  459237 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1009 20:04:01.885592  459237 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1009 20:04:02.722939  459237 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1009 20:04:02.881077  459237 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1009 20:04:03.706185  459237 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1009 20:04:04.604167  459237 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1009 20:04:05.477509  459237 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1009 20:04:05.477660  459237 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [force-systemd-flag-736218 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1009 20:04:05.631230  459237 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1009 20:04:05.631790  459237 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [force-systemd-flag-736218 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1009 20:04:06.110240  459237 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1009 20:04:06.551184  459237 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1009 20:04:06.856925  459237 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1009 20:04:06.857192  459237 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1009 20:04:07.699936  459237 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1009 20:04:08.854020  459237 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1009 20:04:09.833811  459237 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1009 20:04:10.397918  459237 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1009 20:04:10.738991  459237 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1009 20:04:10.740179  459237 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1009 20:04:10.743260  459237 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1009 20:04:10.746755  459237 out.go:252]   - Booting up control plane ...
	I1009 20:04:10.746867  459237 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1009 20:04:10.746949  459237 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1009 20:04:10.748232  459237 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1009 20:04:10.769313  459237 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1009 20:04:10.769686  459237 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1009 20:04:10.777466  459237 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1009 20:04:10.777578  459237 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1009 20:04:10.777623  459237 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1009 20:04:10.913570  459237 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1009 20:04:10.913700  459237 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1009 20:04:11.909477  459237 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.001080756s
	I1009 20:04:11.916529  459237 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1009 20:04:11.916652  459237 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.85.2:8443/livez
	I1009 20:04:11.916746  459237 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1009 20:04:11.916860  459237 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1009 20:08:11.916796  459237 kubeadm.go:318] [control-plane-check] kube-scheduler is not healthy after 4m0.00016937s
	I1009 20:08:11.917050  459237 kubeadm.go:318] [control-plane-check] kube-apiserver is not healthy after 4m0.000401044s
	I1009 20:08:11.917630  459237 kubeadm.go:318] [control-plane-check] kube-controller-manager is not healthy after 4m0.000599289s
	I1009 20:08:11.917649  459237 kubeadm.go:318] 
	I1009 20:08:11.917754  459237 kubeadm.go:318] A control plane component may have crashed or exited when started by the container runtime.
	I1009 20:08:11.917851  459237 kubeadm.go:318] To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1009 20:08:11.917950  459237 kubeadm.go:318] Here is one example how you may list all running Kubernetes containers by using crictl:
	I1009 20:08:11.918058  459237 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1009 20:08:11.918145  459237 kubeadm.go:318] 	Once you have found the failing container, you can inspect its logs with:
	I1009 20:08:11.918229  459237 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	I1009 20:08:11.918237  459237 kubeadm.go:318] 
	I1009 20:08:11.923392  459237 kubeadm.go:318] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1009 20:08:11.923630  459237 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1009 20:08:11.923747  459237 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1009 20:08:11.924330  459237 kubeadm.go:318] error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-apiserver check failed at https://192.168.85.2:8443/livez: client rate limiter Wait returned an error: context deadline exceeded, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	I1009 20:08:11.924425  459237 kubeadm.go:318] To see the stack trace of this error execute with --v=5 or higher
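The wait-control-plane failure above means none of the three control-plane endpoints answered within the 4-minute window. A minimal sketch, assuming shell access on the node, of how one could probe the same endpoints kubeadm checks and then follow its own crictl suggestion to locate the failing container:

	curl -ks https://127.0.0.1:10259/livez; echo      # kube-scheduler
	curl -ks https://127.0.0.1:10257/healthz; echo    # kube-controller-manager
	curl -ks https://192.168.85.2:8443/livez; echo    # kube-apiserver
	sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause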
	W1009 20:08:11.924578  459237 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [force-systemd-flag-736218 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [force-systemd-flag-736218 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.001080756s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.85.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-scheduler is not healthy after 4m0.00016937s
	[control-plane-check] kube-apiserver is not healthy after 4m0.000401044s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000599289s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-apiserver check failed at https://192.168.85.2:8443/livez: client rate limiter Wait returned an error: context deadline exceeded, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [force-systemd-flag-736218 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [force-systemd-flag-736218 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.001080756s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.85.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-scheduler is not healthy after 4m0.00016937s
	[control-plane-check] kube-apiserver is not healthy after 4m0.000401044s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000599289s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-apiserver check failed at https://192.168.85.2:8443/livez: client rate limiter Wait returned an error: context deadline exceeded, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	I1009 20:08:11.924660  459237 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1009 20:08:12.457628  459237 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1009 20:08:12.471972  459237 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1009 20:08:12.472037  459237 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1009 20:08:12.480269  459237 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1009 20:08:12.480292  459237 kubeadm.go:157] found existing configuration files:
	
	I1009 20:08:12.480344  459237 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1009 20:08:12.488611  459237 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1009 20:08:12.488677  459237 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1009 20:08:12.496453  459237 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1009 20:08:12.505181  459237 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1009 20:08:12.505303  459237 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1009 20:08:12.513042  459237 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1009 20:08:12.520935  459237 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1009 20:08:12.521004  459237 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1009 20:08:12.528868  459237 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1009 20:08:12.536615  459237 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1009 20:08:12.536745  459237 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1009 20:08:12.544163  459237 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1009 20:08:12.608800  459237 kubeadm.go:318] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1009 20:08:12.609040  459237 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1009 20:08:12.679432  459237 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1009 20:12:16.118091  459237 kubeadm.go:318] error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-apiserver check failed at https://192.168.85.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": dial tcp 192.168.85.2:8443: connect: connection refused]
	I1009 20:12:16.118199  459237 kubeadm.go:318] To see the stack trace of this error execute with --v=5 or higher
	I1009 20:12:16.121988  459237 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1009 20:12:16.122071  459237 kubeadm.go:318] [preflight] Running pre-flight checks
	I1009 20:12:16.122173  459237 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1009 20:12:16.122236  459237 kubeadm.go:318] KERNEL_VERSION: 5.15.0-1084-aws
	I1009 20:12:16.122279  459237 kubeadm.go:318] OS: Linux
	I1009 20:12:16.122330  459237 kubeadm.go:318] CGROUPS_CPU: enabled
	I1009 20:12:16.122387  459237 kubeadm.go:318] CGROUPS_CPUACCT: enabled
	I1009 20:12:16.122445  459237 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1009 20:12:16.122500  459237 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1009 20:12:16.122555  459237 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1009 20:12:16.122613  459237 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1009 20:12:16.122672  459237 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1009 20:12:16.122726  459237 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1009 20:12:16.122777  459237 kubeadm.go:318] CGROUPS_BLKIO: enabled
	I1009 20:12:16.122856  459237 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1009 20:12:16.122958  459237 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1009 20:12:16.123054  459237 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1009 20:12:16.123129  459237 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1009 20:12:16.126417  459237 out.go:252]   - Generating certificates and keys ...
	I1009 20:12:16.126534  459237 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1009 20:12:16.126631  459237 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1009 20:12:16.126763  459237 kubeadm.go:318] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1009 20:12:16.126829  459237 kubeadm.go:318] [certs] Using existing front-proxy-ca certificate authority
	I1009 20:12:16.126911  459237 kubeadm.go:318] [certs] Using existing front-proxy-client certificate and key on disk
	I1009 20:12:16.126986  459237 kubeadm.go:318] [certs] Using existing etcd/ca certificate authority
	I1009 20:12:16.127072  459237 kubeadm.go:318] [certs] Using existing etcd/server certificate and key on disk
	I1009 20:12:16.127155  459237 kubeadm.go:318] [certs] Using existing etcd/peer certificate and key on disk
	I1009 20:12:16.127239  459237 kubeadm.go:318] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1009 20:12:16.127317  459237 kubeadm.go:318] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1009 20:12:16.127361  459237 kubeadm.go:318] [certs] Using the existing "sa" key
	I1009 20:12:16.127422  459237 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1009 20:12:16.127477  459237 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1009 20:12:16.127538  459237 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1009 20:12:16.127596  459237 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1009 20:12:16.127661  459237 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1009 20:12:16.127715  459237 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1009 20:12:16.127802  459237 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1009 20:12:16.127871  459237 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1009 20:12:16.130895  459237 out.go:252]   - Booting up control plane ...
	I1009 20:12:16.131019  459237 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1009 20:12:16.131109  459237 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1009 20:12:16.131188  459237 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1009 20:12:16.131303  459237 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1009 20:12:16.131407  459237 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1009 20:12:16.131520  459237 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1009 20:12:16.131612  459237 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1009 20:12:16.131654  459237 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1009 20:12:16.131791  459237 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1009 20:12:16.131900  459237 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1009 20:12:16.131961  459237 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.008580101s
	I1009 20:12:16.132079  459237 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1009 20:12:16.132222  459237 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.85.2:8443/livez
	I1009 20:12:16.132319  459237 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1009 20:12:16.132402  459237 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1009 20:12:16.132478  459237 kubeadm.go:318] [control-plane-check] kube-scheduler is not healthy after 4m0.000392632s
	I1009 20:12:16.132564  459237 kubeadm.go:318] [control-plane-check] kube-controller-manager is not healthy after 4m0.000121591s
	I1009 20:12:16.132641  459237 kubeadm.go:318] [control-plane-check] kube-apiserver is not healthy after 4m0.000389745s
	I1009 20:12:16.132646  459237 kubeadm.go:318] 
	I1009 20:12:16.132741  459237 kubeadm.go:318] A control plane component may have crashed or exited when started by the container runtime.
	I1009 20:12:16.132827  459237 kubeadm.go:318] To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1009 20:12:16.132920  459237 kubeadm.go:318] Here is one example how you may list all running Kubernetes containers by using crictl:
	I1009 20:12:16.133018  459237 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1009 20:12:16.133096  459237 kubeadm.go:318] 	Once you have found the failing container, you can inspect its logs with:
	I1009 20:12:16.133378  459237 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	I1009 20:12:16.133422  459237 kubeadm.go:318] 
	I1009 20:12:16.133477  459237 kubeadm.go:402] duration metric: took 8m14.682125173s to StartCluster
	I1009 20:12:16.133531  459237 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 20:12:16.133607  459237 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 20:12:16.161554  459237 cri.go:89] found id: ""
	I1009 20:12:16.161588  459237 logs.go:282] 0 containers: []
	W1009 20:12:16.161597  459237 logs.go:284] No container was found matching "kube-apiserver"
	I1009 20:12:16.161605  459237 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 20:12:16.161671  459237 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 20:12:16.190207  459237 cri.go:89] found id: ""
	I1009 20:12:16.190238  459237 logs.go:282] 0 containers: []
	W1009 20:12:16.190247  459237 logs.go:284] No container was found matching "etcd"
	I1009 20:12:16.190253  459237 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 20:12:16.190317  459237 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 20:12:16.220534  459237 cri.go:89] found id: ""
	I1009 20:12:16.220558  459237 logs.go:282] 0 containers: []
	W1009 20:12:16.220567  459237 logs.go:284] No container was found matching "coredns"
	I1009 20:12:16.220574  459237 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 20:12:16.220635  459237 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 20:12:16.247774  459237 cri.go:89] found id: ""
	I1009 20:12:16.247799  459237 logs.go:282] 0 containers: []
	W1009 20:12:16.247808  459237 logs.go:284] No container was found matching "kube-scheduler"
	I1009 20:12:16.247815  459237 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 20:12:16.247877  459237 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 20:12:16.275360  459237 cri.go:89] found id: ""
	I1009 20:12:16.275385  459237 logs.go:282] 0 containers: []
	W1009 20:12:16.275394  459237 logs.go:284] No container was found matching "kube-proxy"
	I1009 20:12:16.275401  459237 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 20:12:16.275466  459237 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 20:12:16.308069  459237 cri.go:89] found id: ""
	I1009 20:12:16.308095  459237 logs.go:282] 0 containers: []
	W1009 20:12:16.308106  459237 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 20:12:16.308113  459237 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 20:12:16.308180  459237 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 20:12:16.335069  459237 cri.go:89] found id: ""
	I1009 20:12:16.335094  459237 logs.go:282] 0 containers: []
	W1009 20:12:16.335104  459237 logs.go:284] No container was found matching "kindnet"
	I1009 20:12:16.335114  459237 logs.go:123] Gathering logs for kubelet ...
	I1009 20:12:16.335152  459237 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 20:12:16.425930  459237 logs.go:123] Gathering logs for dmesg ...
	I1009 20:12:16.425966  459237 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 20:12:16.443924  459237 logs.go:123] Gathering logs for describe nodes ...
	I1009 20:12:16.443953  459237 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 20:12:16.520215  459237 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1009 20:12:16.511800    2364 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1009 20:12:16.512589    2364 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1009 20:12:16.514110    2364 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1009 20:12:16.514615    2364 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1009 20:12:16.516156    2364 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1009 20:12:16.511800    2364 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1009 20:12:16.512589    2364 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1009 20:12:16.514110    2364 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1009 20:12:16.514615    2364 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1009 20:12:16.516156    2364 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 20:12:16.520240  459237 logs.go:123] Gathering logs for CRI-O ...
	I1009 20:12:16.520254  459237 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 20:12:16.597236  459237 logs.go:123] Gathering logs for container status ...
	I1009 20:12:16.597285  459237 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1009 20:12:16.627047  459237 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.008580101s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.85.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-scheduler is not healthy after 4m0.000392632s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000121591s
	[control-plane-check] kube-apiserver is not healthy after 4m0.000389745s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-apiserver check failed at https://192.168.85.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": dial tcp 192.168.85.2:8443: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	W1009 20:12:16.627105  459237 out.go:285] * 
	W1009 20:12:16.627159  459237 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.008580101s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.85.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-scheduler is not healthy after 4m0.000392632s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000121591s
	[control-plane-check] kube-apiserver is not healthy after 4m0.000389745s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-apiserver check failed at https://192.168.85.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": dial tcp 192.168.85.2:8443: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	W1009 20:12:16.627170  459237 out.go:285] * 
	W1009 20:12:16.629371  459237 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1009 20:12:16.637390  459237 out.go:203] 
	W1009 20:12:16.640345  459237 out.go:285] X Exiting due to GUEST_START: failed to start node: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.008580101s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.85.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-scheduler is not healthy after 4m0.000392632s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000121591s
	[control-plane-check] kube-apiserver is not healthy after 4m0.000389745s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-apiserver check failed at https://192.168.85.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": dial tcp 192.168.85.2:8443: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	W1009 20:12:16.640373  459237 out.go:285] * 
	I1009 20:12:16.643585  459237 out.go:203] 

                                                
                                                
** /stderr **
docker_test.go:93: failed to start minikube with args: "out/minikube-linux-arm64 start -p force-systemd-flag-736218 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio" : exit status 80
docker_test.go:132: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-flag-736218 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
docker_test.go:106: *** TestForceSystemdFlag FAILED at 2025-10-09 20:12:17.014152664 +0000 UTC m=+4340.618950451
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestForceSystemdFlag]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestForceSystemdFlag]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect force-systemd-flag-736218
helpers_test.go:243: (dbg) docker inspect force-systemd-flag-736218:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "9845e10a990601f6b91e2fe5cc4917d02fdd042bbcdc64c9a0afa0975aa1d05c",
	        "Created": "2025-10-09T20:03:51.022818283Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 459714,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-09T20:03:51.083278084Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:0e619a02aa1f399c748a05785ca381533d7a01df724b62f3a9ddd9e288db8f6f",
	        "ResolvConfPath": "/var/lib/docker/containers/9845e10a990601f6b91e2fe5cc4917d02fdd042bbcdc64c9a0afa0975aa1d05c/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/9845e10a990601f6b91e2fe5cc4917d02fdd042bbcdc64c9a0afa0975aa1d05c/hostname",
	        "HostsPath": "/var/lib/docker/containers/9845e10a990601f6b91e2fe5cc4917d02fdd042bbcdc64c9a0afa0975aa1d05c/hosts",
	        "LogPath": "/var/lib/docker/containers/9845e10a990601f6b91e2fe5cc4917d02fdd042bbcdc64c9a0afa0975aa1d05c/9845e10a990601f6b91e2fe5cc4917d02fdd042bbcdc64c9a0afa0975aa1d05c-json.log",
	        "Name": "/force-systemd-flag-736218",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "force-systemd-flag-736218:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "force-systemd-flag-736218",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "9845e10a990601f6b91e2fe5cc4917d02fdd042bbcdc64c9a0afa0975aa1d05c",
	                "LowerDir": "/var/lib/docker/overlay2/302e023f97df28a42e519b2d1a49fbc918c76138eda4085359483da4bb09f15d-init/diff:/var/lib/docker/overlay2/810a91395ed9b7ed2c0bbbdee8600efcf64f88722cbabc47d471235a9f901ed9/diff",
	                "MergedDir": "/var/lib/docker/overlay2/302e023f97df28a42e519b2d1a49fbc918c76138eda4085359483da4bb09f15d/merged",
	                "UpperDir": "/var/lib/docker/overlay2/302e023f97df28a42e519b2d1a49fbc918c76138eda4085359483da4bb09f15d/diff",
	                "WorkDir": "/var/lib/docker/overlay2/302e023f97df28a42e519b2d1a49fbc918c76138eda4085359483da4bb09f15d/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "force-systemd-flag-736218",
	                "Source": "/var/lib/docker/volumes/force-systemd-flag-736218/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "force-systemd-flag-736218",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "force-systemd-flag-736218",
	                "name.minikube.sigs.k8s.io": "force-systemd-flag-736218",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "74f7d70107e5097cc329298b2f8d64b6bf3baed2badf0af553b050c49d64b17a",
	            "SandboxKey": "/var/run/docker/netns/74f7d70107e5",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33396"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33397"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33400"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33398"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33399"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "force-systemd-flag-736218": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "ba:df:f7:62:06:68",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "36c4aa4738cfbbc1b62a024e474c67d3db4a120061013b176be239d5359a4c9f",
	                    "EndpointID": "ace686776a466a0e474078dba87ce4eb6435dc12eb3fb1306dc8718a48a64da4",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "force-systemd-flag-736218",
	                        "9845e10a9906"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
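The Ports block in the inspect output above is how the test harness reaches the node: each container port (22, 2376, 5000, 8443, 32443) is published on a random 127.0.0.1 host port. A minimal sketch, not part of the test suite, of reading that mapping back with the same Go template the logs below use; the container name is the profile from this report:

    // portmap.go - print the host port that docker published for the node's SSH port.
    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    func main() {
    	// The template walks NetworkSettings.Ports exactly as shown in the inspect
    	// output above (22/tcp -> 127.0.0.1:33396 in this run).
    	out, err := exec.Command("docker", "container", "inspect",
    		"-f", `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`,
    		"force-systemd-flag-736218").Output()
    	if err != nil {
    		fmt.Println("inspect failed:", err)
    		return
    	}
    	fmt.Println("ssh published on 127.0.0.1:" + strings.TrimSpace(string(out)))
    }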
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p force-systemd-flag-736218 -n force-systemd-flag-736218
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p force-systemd-flag-736218 -n force-systemd-flag-736218: exit status 6 (300.981742ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1009 20:12:17.318371  467654 status.go:458] kubeconfig endpoint: get endpoint: "force-systemd-flag-736218" does not appear in /home/jenkins/minikube-integration/21683-294150/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:247: status error: exit status 6 (may be ok)
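The exit status 6 above comes from the kubeconfig check: the profile name has to exist as a cluster entry in the kubeconfig before an endpoint can be resolved, which is why the output suggests `minikube update-context`. A rough sketch of that kind of lookup, assuming k8s.io/client-go is available and KUBECONFIG points at the file named in the error; this is illustrative, not the status.go implementation:

    // kubecfg_check.go - does the profile appear as a cluster in the kubeconfig?
    package main

    import (
    	"fmt"
    	"os"

    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	path := os.Getenv("KUBECONFIG")
    	cfg, err := clientcmd.LoadFromFile(path)
    	if err != nil {
    		fmt.Println("load kubeconfig:", err)
    		return
    	}
    	name := "force-systemd-flag-736218" // profile from this report
    	if c, ok := cfg.Clusters[name]; ok {
    		fmt.Println("endpoint:", c.Server)
    	} else {
    		fmt.Printf("%q does not appear in %s\n", name, path)
    	}
    }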
helpers_test.go:252: <<< TestForceSystemdFlag FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestForceSystemdFlag]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-flag-736218 logs -n 25
helpers_test.go:260: TestForceSystemdFlag logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                    ARGS                                                    │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p cilium-535911 sudo systemctl cat kubelet --no-pager                                                     │ cilium-535911             │ jenkins │ v1.37.0 │ 09 Oct 25 20:05 UTC │                     │
	│ ssh     │ -p cilium-535911 sudo journalctl -xeu kubelet --all --full --no-pager                                      │ cilium-535911             │ jenkins │ v1.37.0 │ 09 Oct 25 20:05 UTC │                     │
	│ ssh     │ -p cilium-535911 sudo cat /etc/kubernetes/kubelet.conf                                                     │ cilium-535911             │ jenkins │ v1.37.0 │ 09 Oct 25 20:05 UTC │                     │
	│ ssh     │ -p cilium-535911 sudo cat /var/lib/kubelet/config.yaml                                                     │ cilium-535911             │ jenkins │ v1.37.0 │ 09 Oct 25 20:05 UTC │                     │
	│ ssh     │ -p cilium-535911 sudo systemctl status docker --all --full --no-pager                                      │ cilium-535911             │ jenkins │ v1.37.0 │ 09 Oct 25 20:05 UTC │                     │
	│ ssh     │ -p cilium-535911 sudo systemctl cat docker --no-pager                                                      │ cilium-535911             │ jenkins │ v1.37.0 │ 09 Oct 25 20:05 UTC │                     │
	│ ssh     │ -p cilium-535911 sudo cat /etc/docker/daemon.json                                                          │ cilium-535911             │ jenkins │ v1.37.0 │ 09 Oct 25 20:05 UTC │                     │
	│ ssh     │ -p cilium-535911 sudo docker system info                                                                   │ cilium-535911             │ jenkins │ v1.37.0 │ 09 Oct 25 20:05 UTC │                     │
	│ ssh     │ -p cilium-535911 sudo systemctl status cri-docker --all --full --no-pager                                  │ cilium-535911             │ jenkins │ v1.37.0 │ 09 Oct 25 20:05 UTC │                     │
	│ ssh     │ -p cilium-535911 sudo systemctl cat cri-docker --no-pager                                                  │ cilium-535911             │ jenkins │ v1.37.0 │ 09 Oct 25 20:05 UTC │                     │
	│ ssh     │ -p cilium-535911 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                             │ cilium-535911             │ jenkins │ v1.37.0 │ 09 Oct 25 20:05 UTC │                     │
	│ ssh     │ -p cilium-535911 sudo cat /usr/lib/systemd/system/cri-docker.service                                       │ cilium-535911             │ jenkins │ v1.37.0 │ 09 Oct 25 20:05 UTC │                     │
	│ ssh     │ -p cilium-535911 sudo cri-dockerd --version                                                                │ cilium-535911             │ jenkins │ v1.37.0 │ 09 Oct 25 20:05 UTC │                     │
	│ ssh     │ -p cilium-535911 sudo systemctl status containerd --all --full --no-pager                                  │ cilium-535911             │ jenkins │ v1.37.0 │ 09 Oct 25 20:05 UTC │                     │
	│ ssh     │ -p cilium-535911 sudo systemctl cat containerd --no-pager                                                  │ cilium-535911             │ jenkins │ v1.37.0 │ 09 Oct 25 20:05 UTC │                     │
	│ ssh     │ -p cilium-535911 sudo cat /lib/systemd/system/containerd.service                                           │ cilium-535911             │ jenkins │ v1.37.0 │ 09 Oct 25 20:05 UTC │                     │
	│ ssh     │ -p cilium-535911 sudo cat /etc/containerd/config.toml                                                      │ cilium-535911             │ jenkins │ v1.37.0 │ 09 Oct 25 20:05 UTC │                     │
	│ ssh     │ -p cilium-535911 sudo containerd config dump                                                               │ cilium-535911             │ jenkins │ v1.37.0 │ 09 Oct 25 20:05 UTC │                     │
	│ ssh     │ -p cilium-535911 sudo systemctl status crio --all --full --no-pager                                        │ cilium-535911             │ jenkins │ v1.37.0 │ 09 Oct 25 20:05 UTC │                     │
	│ ssh     │ -p cilium-535911 sudo systemctl cat crio --no-pager                                                        │ cilium-535911             │ jenkins │ v1.37.0 │ 09 Oct 25 20:05 UTC │                     │
	│ ssh     │ -p cilium-535911 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                              │ cilium-535911             │ jenkins │ v1.37.0 │ 09 Oct 25 20:05 UTC │                     │
	│ ssh     │ -p cilium-535911 sudo crio config                                                                          │ cilium-535911             │ jenkins │ v1.37.0 │ 09 Oct 25 20:05 UTC │                     │
	│ delete  │ -p cilium-535911                                                                                           │ cilium-535911             │ jenkins │ v1.37.0 │ 09 Oct 25 20:05 UTC │ 09 Oct 25 20:05 UTC │
	│ start   │ -p force-systemd-env-242564 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio │ force-systemd-env-242564  │ jenkins │ v1.37.0 │ 09 Oct 25 20:05 UTC │                     │
	│ ssh     │ force-systemd-flag-736218 ssh cat /etc/crio/crio.conf.d/02-crio.conf                                       │ force-systemd-flag-736218 │ jenkins │ v1.37.0 │ 09 Oct 25 20:12 UTC │ 09 Oct 25 20:12 UTC │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/09 20:05:54
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1009 20:05:54.747252  463914 out.go:360] Setting OutFile to fd 1 ...
	I1009 20:05:54.747384  463914 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1009 20:05:54.747396  463914 out.go:374] Setting ErrFile to fd 2...
	I1009 20:05:54.747401  463914 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1009 20:05:54.747742  463914 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21683-294150/.minikube/bin
	I1009 20:05:54.748216  463914 out.go:368] Setting JSON to false
	I1009 20:05:54.749240  463914 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":10094,"bootTime":1760030261,"procs":173,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1009 20:05:54.749322  463914 start.go:143] virtualization:  
	I1009 20:05:54.752846  463914 out.go:179] * [force-systemd-env-242564] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1009 20:05:54.756558  463914 notify.go:221] Checking for updates...
	I1009 20:05:54.759574  463914 out.go:179]   - MINIKUBE_LOCATION=21683
	I1009 20:05:54.762912  463914 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1009 20:05:54.765790  463914 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21683-294150/kubeconfig
	I1009 20:05:54.768676  463914 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21683-294150/.minikube
	I1009 20:05:54.771565  463914 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1009 20:05:54.774390  463914 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=true
	I1009 20:05:54.777915  463914 config.go:182] Loaded profile config "force-systemd-flag-736218": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1009 20:05:54.778088  463914 driver.go:422] Setting default libvirt URI to qemu:///system
	I1009 20:05:54.812970  463914 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1009 20:05:54.813206  463914 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1009 20:05:54.868152  463914 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-09 20:05:54.858875083 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214827008 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1009 20:05:54.868265  463914 docker.go:319] overlay module found
	I1009 20:05:54.871424  463914 out.go:179] * Using the docker driver based on user configuration
	I1009 20:05:54.874318  463914 start.go:309] selected driver: docker
	I1009 20:05:54.874336  463914 start.go:930] validating driver "docker" against <nil>
	I1009 20:05:54.874350  463914 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1009 20:05:54.875131  463914 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1009 20:05:54.934141  463914 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-09 20:05:54.924820306 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214827008 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1009 20:05:54.934329  463914 start_flags.go:328] no existing cluster config was found, will generate one from the flags 
	I1009 20:05:54.934551  463914 start_flags.go:975] Wait components to verify : map[apiserver:true system_pods:true]
	I1009 20:05:54.937871  463914 out.go:179] * Using Docker driver with root privileges
	I1009 20:05:54.940788  463914 cni.go:84] Creating CNI manager for ""
	I1009 20:05:54.940876  463914 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1009 20:05:54.940897  463914 start_flags.go:337] Found "CNI" CNI - setting NetworkPlugin=cni
	I1009 20:05:54.940990  463914 start.go:353] cluster config:
	{Name:force-systemd-env-242564 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:force-systemd-env-242564 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.
local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1009 20:05:54.945214  463914 out.go:179] * Starting "force-systemd-env-242564" primary control-plane node in "force-systemd-env-242564" cluster
	I1009 20:05:54.948008  463914 cache.go:123] Beginning downloading kic base image for docker with crio
	I1009 20:05:54.950968  463914 out.go:179] * Pulling base image v0.0.48-1759745255-21703 ...
	I1009 20:05:54.953733  463914 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1009 20:05:54.953802  463914 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21683-294150/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1009 20:05:54.953813  463914 cache.go:58] Caching tarball of preloaded images
	I1009 20:05:54.953846  463914 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon
	I1009 20:05:54.953927  463914 preload.go:233] Found /home/jenkins/minikube-integration/21683-294150/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1009 20:05:54.953938  463914 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1009 20:05:54.954044  463914 profile.go:143] Saving config to /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/force-systemd-env-242564/config.json ...
	I1009 20:05:54.954061  463914 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/force-systemd-env-242564/config.json: {Name:mkc0bcf42f8203a30a8b1921d806208cae48a73c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 20:05:54.974075  463914 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon, skipping pull
	I1009 20:05:54.974104  463914 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 exists in daemon, skipping load
	I1009 20:05:54.974118  463914 cache.go:232] Successfully downloaded all kic artifacts
	I1009 20:05:54.974141  463914 start.go:361] acquireMachinesLock for force-systemd-env-242564: {Name:mk389361bb03203729416af71489bf16c0efad4c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1009 20:05:54.974254  463914 start.go:365] duration metric: took 92.472µs to acquireMachinesLock for "force-systemd-env-242564"
	I1009 20:05:54.974285  463914 start.go:94] Provisioning new machine with config: &{Name:force-systemd-env-242564 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:force-systemd-env-242564 Namespace:default APIServerHA
VIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SS
HAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1009 20:05:54.974361  463914 start.go:126] createHost starting for "" (driver="docker")
	I1009 20:05:54.977787  463914 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1009 20:05:54.978044  463914 start.go:160] libmachine.API.Create for "force-systemd-env-242564" (driver="docker")
	I1009 20:05:54.978093  463914 client.go:168] LocalClient.Create starting
	I1009 20:05:54.978197  463914 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21683-294150/.minikube/certs/ca.pem
	I1009 20:05:54.978237  463914 main.go:141] libmachine: Decoding PEM data...
	I1009 20:05:54.978260  463914 main.go:141] libmachine: Parsing certificate...
	I1009 20:05:54.978317  463914 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21683-294150/.minikube/certs/cert.pem
	I1009 20:05:54.978339  463914 main.go:141] libmachine: Decoding PEM data...
	I1009 20:05:54.978357  463914 main.go:141] libmachine: Parsing certificate...
	I1009 20:05:54.978742  463914 cli_runner.go:164] Run: docker network inspect force-systemd-env-242564 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1009 20:05:54.994762  463914 cli_runner.go:211] docker network inspect force-systemd-env-242564 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1009 20:05:54.994843  463914 network_create.go:284] running [docker network inspect force-systemd-env-242564] to gather additional debugging logs...
	I1009 20:05:54.994885  463914 cli_runner.go:164] Run: docker network inspect force-systemd-env-242564
	W1009 20:05:55.033565  463914 cli_runner.go:211] docker network inspect force-systemd-env-242564 returned with exit code 1
	I1009 20:05:55.033605  463914 network_create.go:287] error running [docker network inspect force-systemd-env-242564]: docker network inspect force-systemd-env-242564: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network force-systemd-env-242564 not found
	I1009 20:05:55.033621  463914 network_create.go:289] output of [docker network inspect force-systemd-env-242564]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network force-systemd-env-242564 not found
	
	** /stderr **
	I1009 20:05:55.034228  463914 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1009 20:05:55.061682  463914 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-3847a6577684 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:6e:b5:e6:7d:c7:ad} reservation:<nil>}
	I1009 20:05:55.062090  463914 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-5742e12e0dad IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:9e:82:91:fd:a6:fb} reservation:<nil>}
	I1009 20:05:55.062348  463914 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-11b099636187 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:0e:bb:e5:1b:6d:a2} reservation:<nil>}
	I1009 20:05:55.062803  463914 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40019cdf20}
	I1009 20:05:55.062828  463914 network_create.go:124] attempt to create docker network force-systemd-env-242564 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1009 20:05:55.062898  463914 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=force-systemd-env-242564 force-systemd-env-242564
	I1009 20:05:55.134320  463914 network_create.go:108] docker network force-systemd-env-242564 192.168.76.0/24 created
	I1009 20:05:55.134352  463914 kic.go:121] calculated static IP "192.168.76.2" for the "force-systemd-env-242564" container
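The three "skipping subnet ... that is taken" lines above show how this profile ends up on 192.168.76.0/24: candidate /24 networks are tried until one is not already used by an existing docker bridge, the gateway takes .1, and the node container gets the static IP .2. A simplified sketch; the taken set is hard-coded from this report and the 9-address step is inferred from the subnets it lists (49, 58, 67, 76, 85), while minikube derives both by inspecting the existing networks:

    // subnet_pick.go - pick the first free 192.168.x.0/24 candidate.
    package main

    import "fmt"

    func main() {
    	taken := map[string]bool{
    		"192.168.49.0/24": true, // br-3847a6577684 in this run
    		"192.168.58.0/24": true, // br-5742e12e0dad
    		"192.168.67.0/24": true, // br-11b099636187
    	}
    	for octet := 49; octet <= 247; octet += 9 {
    		subnet := fmt.Sprintf("192.168.%d.0/24", octet)
    		if taken[subnet] {
    			fmt.Println("skipping subnet", subnet, "that is taken")
    			continue
    		}
    		fmt.Printf("using free private subnet %s (gateway 192.168.%d.1, node IP 192.168.%d.2)\n",
    			subnet, octet, octet)
    		break
    	}
    }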
	I1009 20:05:55.134430  463914 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1009 20:05:55.152257  463914 cli_runner.go:164] Run: docker volume create force-systemd-env-242564 --label name.minikube.sigs.k8s.io=force-systemd-env-242564 --label created_by.minikube.sigs.k8s.io=true
	I1009 20:05:55.172281  463914 oci.go:103] Successfully created a docker volume force-systemd-env-242564
	I1009 20:05:55.172390  463914 cli_runner.go:164] Run: docker run --rm --name force-systemd-env-242564-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=force-systemd-env-242564 --entrypoint /usr/bin/test -v force-systemd-env-242564:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 -d /var/lib
	I1009 20:05:55.695090  463914 oci.go:107] Successfully prepared a docker volume force-systemd-env-242564
	I1009 20:05:55.695138  463914 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1009 20:05:55.695157  463914 kic.go:194] Starting extracting preloaded images to volume ...
	I1009 20:05:55.695237  463914 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21683-294150/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v force-systemd-env-242564:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 -I lz4 -xf /preloaded.tar -C /extractDir
	I1009 20:06:00.262527  463914 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21683-294150/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v force-systemd-env-242564:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 -I lz4 -xf /preloaded.tar -C /extractDir: (4.567223793s)
	I1009 20:06:00.262563  463914 kic.go:203] duration metric: took 4.567401468s to extract preloaded images to volume ...
	W1009 20:06:00.262771  463914 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1009 20:06:00.262898  463914 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1009 20:06:00.482583  463914 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname force-systemd-env-242564 --name force-systemd-env-242564 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=force-systemd-env-242564 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=force-systemd-env-242564 --network force-systemd-env-242564 --ip 192.168.76.2 --volume force-systemd-env-242564:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92
	I1009 20:06:00.872122  463914 cli_runner.go:164] Run: docker container inspect force-systemd-env-242564 --format={{.State.Running}}
	I1009 20:06:00.895257  463914 cli_runner.go:164] Run: docker container inspect force-systemd-env-242564 --format={{.State.Status}}
	I1009 20:06:00.921286  463914 cli_runner.go:164] Run: docker exec force-systemd-env-242564 stat /var/lib/dpkg/alternatives/iptables
	I1009 20:06:00.974472  463914 oci.go:144] the created container "force-systemd-env-242564" has a running status.
	I1009 20:06:00.974525  463914 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21683-294150/.minikube/machines/force-systemd-env-242564/id_rsa...
	I1009 20:06:01.076845  463914 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-294150/.minikube/machines/force-systemd-env-242564/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I1009 20:06:01.076945  463914 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21683-294150/.minikube/machines/force-systemd-env-242564/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1009 20:06:01.098657  463914 cli_runner.go:164] Run: docker container inspect force-systemd-env-242564 --format={{.State.Status}}
	I1009 20:06:01.120922  463914 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1009 20:06:01.120943  463914 kic_runner.go:114] Args: [docker exec --privileged force-systemd-env-242564 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1009 20:06:01.178939  463914 cli_runner.go:164] Run: docker container inspect force-systemd-env-242564 --format={{.State.Status}}
	I1009 20:06:01.201388  463914 machine.go:93] provisionDockerMachine start ...
	I1009 20:06:01.201502  463914 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-242564
	I1009 20:06:01.230004  463914 main.go:141] libmachine: Using SSH client type: native
	I1009 20:06:01.230347  463914 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33401 <nil> <nil>}
	I1009 20:06:01.230356  463914 main.go:141] libmachine: About to run SSH command:
	hostname
	I1009 20:06:01.231155  463914 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1009 20:06:04.381410  463914 main.go:141] libmachine: SSH cmd err, output: <nil>: force-systemd-env-242564
	
	I1009 20:06:04.381520  463914 ubuntu.go:182] provisioning hostname "force-systemd-env-242564"
	I1009 20:06:04.381595  463914 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-242564
	I1009 20:06:04.401787  463914 main.go:141] libmachine: Using SSH client type: native
	I1009 20:06:04.402093  463914 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33401 <nil> <nil>}
	I1009 20:06:04.402110  463914 main.go:141] libmachine: About to run SSH command:
	sudo hostname force-systemd-env-242564 && echo "force-systemd-env-242564" | sudo tee /etc/hostname
	I1009 20:06:04.559896  463914 main.go:141] libmachine: SSH cmd err, output: <nil>: force-systemd-env-242564
	
	I1009 20:06:04.560020  463914 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-242564
	I1009 20:06:04.579898  463914 main.go:141] libmachine: Using SSH client type: native
	I1009 20:06:04.580228  463914 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33401 <nil> <nil>}
	I1009 20:06:04.580250  463914 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sforce-systemd-env-242564' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 force-systemd-env-242564/g' /etc/hosts;
				else 
					echo '127.0.1.1 force-systemd-env-242564' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1009 20:06:04.730002  463914 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1009 20:06:04.730030  463914 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21683-294150/.minikube CaCertPath:/home/jenkins/minikube-integration/21683-294150/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21683-294150/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21683-294150/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21683-294150/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21683-294150/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21683-294150/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21683-294150/.minikube}
	I1009 20:06:04.730051  463914 ubuntu.go:190] setting up certificates
	I1009 20:06:04.730062  463914 provision.go:84] configureAuth start
	I1009 20:06:04.730147  463914 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" force-systemd-env-242564
	I1009 20:06:04.748978  463914 provision.go:143] copyHostCerts
	I1009 20:06:04.749022  463914 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-294150/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21683-294150/.minikube/ca.pem
	I1009 20:06:04.749056  463914 exec_runner.go:144] found /home/jenkins/minikube-integration/21683-294150/.minikube/ca.pem, removing ...
	I1009 20:06:04.749069  463914 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21683-294150/.minikube/ca.pem
	I1009 20:06:04.749308  463914 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-294150/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21683-294150/.minikube/ca.pem (1078 bytes)
	I1009 20:06:04.749402  463914 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-294150/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21683-294150/.minikube/cert.pem
	I1009 20:06:04.749428  463914 exec_runner.go:144] found /home/jenkins/minikube-integration/21683-294150/.minikube/cert.pem, removing ...
	I1009 20:06:04.749434  463914 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21683-294150/.minikube/cert.pem
	I1009 20:06:04.749468  463914 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-294150/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21683-294150/.minikube/cert.pem (1123 bytes)
	I1009 20:06:04.749523  463914 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-294150/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21683-294150/.minikube/key.pem
	I1009 20:06:04.749550  463914 exec_runner.go:144] found /home/jenkins/minikube-integration/21683-294150/.minikube/key.pem, removing ...
	I1009 20:06:04.749560  463914 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21683-294150/.minikube/key.pem
	I1009 20:06:04.749587  463914 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-294150/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21683-294150/.minikube/key.pem (1679 bytes)
	I1009 20:06:04.749648  463914 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21683-294150/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21683-294150/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21683-294150/.minikube/certs/ca-key.pem org=jenkins.force-systemd-env-242564 san=[127.0.0.1 192.168.76.2 force-systemd-env-242564 localhost minikube]
	I1009 20:06:05.190068  463914 provision.go:177] copyRemoteCerts
	I1009 20:06:05.190147  463914 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1009 20:06:05.190193  463914 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-242564
	I1009 20:06:05.209080  463914 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33401 SSHKeyPath:/home/jenkins/minikube-integration/21683-294150/.minikube/machines/force-systemd-env-242564/id_rsa Username:docker}
	I1009 20:06:05.317618  463914 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-294150/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1009 20:06:05.317694  463914 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-294150/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1009 20:06:05.336728  463914 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-294150/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1009 20:06:05.336808  463914 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-294150/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I1009 20:06:05.356734  463914 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-294150/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1009 20:06:05.356812  463914 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-294150/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1009 20:06:05.376260  463914 provision.go:87] duration metric: took 646.171998ms to configureAuth
	I1009 20:06:05.376337  463914 ubuntu.go:206] setting minikube options for container-runtime
	I1009 20:06:05.376562  463914 config.go:182] Loaded profile config "force-systemd-env-242564": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1009 20:06:05.376693  463914 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-242564
	I1009 20:06:05.400562  463914 main.go:141] libmachine: Using SSH client type: native
	I1009 20:06:05.400916  463914 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33401 <nil> <nil>}
	I1009 20:06:05.400940  463914 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1009 20:06:05.674596  463914 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1009 20:06:05.674625  463914 machine.go:96] duration metric: took 4.473216751s to provisionDockerMachine
	I1009 20:06:05.674644  463914 client.go:171] duration metric: took 10.696531805s to LocalClient.Create
	I1009 20:06:05.674669  463914 start.go:168] duration metric: took 10.696626542s to libmachine.API.Create "force-systemd-env-242564"
	I1009 20:06:05.674682  463914 start.go:294] postStartSetup for "force-systemd-env-242564" (driver="docker")
	I1009 20:06:05.674695  463914 start.go:323] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1009 20:06:05.674779  463914 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1009 20:06:05.674857  463914 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-242564
	I1009 20:06:05.693346  463914 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33401 SSHKeyPath:/home/jenkins/minikube-integration/21683-294150/.minikube/machines/force-systemd-env-242564/id_rsa Username:docker}
	I1009 20:06:05.797946  463914 ssh_runner.go:195] Run: cat /etc/os-release
	I1009 20:06:05.801790  463914 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1009 20:06:05.801822  463914 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1009 20:06:05.801835  463914 filesync.go:126] Scanning /home/jenkins/minikube-integration/21683-294150/.minikube/addons for local assets ...
	I1009 20:06:05.801895  463914 filesync.go:126] Scanning /home/jenkins/minikube-integration/21683-294150/.minikube/files for local assets ...
	I1009 20:06:05.802001  463914 filesync.go:149] local asset: /home/jenkins/minikube-integration/21683-294150/.minikube/files/etc/ssl/certs/2960022.pem -> 2960022.pem in /etc/ssl/certs
	I1009 20:06:05.802014  463914 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-294150/.minikube/files/etc/ssl/certs/2960022.pem -> /etc/ssl/certs/2960022.pem
	I1009 20:06:05.802123  463914 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1009 20:06:05.810702  463914 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-294150/.minikube/files/etc/ssl/certs/2960022.pem --> /etc/ssl/certs/2960022.pem (1708 bytes)
	I1009 20:06:05.831074  463914 start.go:297] duration metric: took 156.376213ms for postStartSetup
	I1009 20:06:05.831515  463914 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" force-systemd-env-242564
	I1009 20:06:05.849403  463914 profile.go:143] Saving config to /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/force-systemd-env-242564/config.json ...
	I1009 20:06:05.849709  463914 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1009 20:06:05.849762  463914 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-242564
	I1009 20:06:05.867718  463914 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33401 SSHKeyPath:/home/jenkins/minikube-integration/21683-294150/.minikube/machines/force-systemd-env-242564/id_rsa Username:docker}
	I1009 20:06:05.970503  463914 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1009 20:06:05.975635  463914 start.go:129] duration metric: took 11.001255597s to createHost
	I1009 20:06:05.975669  463914 start.go:84] releasing machines lock for "force-systemd-env-242564", held for 11.001395528s
	I1009 20:06:05.975748  463914 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" force-systemd-env-242564
	I1009 20:06:05.993146  463914 ssh_runner.go:195] Run: cat /version.json
	I1009 20:06:05.993160  463914 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1009 20:06:05.993201  463914 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-242564
	I1009 20:06:05.993201  463914 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-242564
	I1009 20:06:06.014732  463914 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33401 SSHKeyPath:/home/jenkins/minikube-integration/21683-294150/.minikube/machines/force-systemd-env-242564/id_rsa Username:docker}
	I1009 20:06:06.015309  463914 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33401 SSHKeyPath:/home/jenkins/minikube-integration/21683-294150/.minikube/machines/force-systemd-env-242564/id_rsa Username:docker}
	I1009 20:06:06.117421  463914 ssh_runner.go:195] Run: systemctl --version
	I1009 20:06:06.208163  463914 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1009 20:06:06.246019  463914 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1009 20:06:06.250453  463914 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1009 20:06:06.250528  463914 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1009 20:06:06.281368  463914 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1009 20:06:06.281435  463914 start.go:496] detecting cgroup driver to use...
	I1009 20:06:06.281467  463914 start.go:500] using "systemd" cgroup driver as enforced via flags
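This run never has to guess the cgroup driver because the test enforces systemd (MINIKUBE_FORCE_SYSTEMD=true in the flag dump above). A simplified sketch of that decision; the fallback `docker info --format "{{.CgroupDriver}}"` call mirrors the docker info output earlier in this log, and the function name is illustrative only:

    // cgroup_driver.go - decide which cgroup driver to configure for the runtime.
    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    	"strings"
    )

    func cgroupDriver() string {
    	if os.Getenv("MINIKUBE_FORCE_SYSTEMD") == "true" {
    		return "systemd" // enforced via flags/env, as in this run
    	}
    	out, err := exec.Command("docker", "info", "--format", "{{.CgroupDriver}}").Output()
    	if err != nil {
    		return "cgroupfs" // conservative default when detection fails
    	}
    	return strings.TrimSpace(string(out))
    }

    func main() {
    	fmt.Println("using", cgroupDriver(), "cgroup driver")
    }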
	I1009 20:06:06.281545  463914 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1009 20:06:06.300774  463914 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1009 20:06:06.314523  463914 docker.go:218] disabling cri-docker service (if available) ...
	I1009 20:06:06.314593  463914 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1009 20:06:06.332924  463914 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1009 20:06:06.353792  463914 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1009 20:06:06.480352  463914 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1009 20:06:06.609436  463914 docker.go:234] disabling docker service ...
	I1009 20:06:06.609526  463914 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1009 20:06:06.636198  463914 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1009 20:06:06.650422  463914 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1009 20:06:06.771294  463914 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1009 20:06:06.896845  463914 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1009 20:06:06.911573  463914 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1009 20:06:06.931618  463914 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1009 20:06:06.931726  463914 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 20:06:06.942007  463914 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1009 20:06:06.942137  463914 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 20:06:06.952105  463914 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 20:06:06.962001  463914 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 20:06:06.972166  463914 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1009 20:06:06.981181  463914 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 20:06:06.991146  463914 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 20:06:07.013226  463914 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 20:06:07.023977  463914 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1009 20:06:07.032583  463914 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1009 20:06:07.041083  463914 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 20:06:07.163127  463914 ssh_runner.go:195] Run: sudo systemctl restart crio
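The sed commands above rewrite /etc/crio/crio.conf.d/02-crio.conf in place before crio is restarted: the pause image is pinned to registry.k8s.io/pause:3.10.1, cgroup_manager is switched to "systemd", conmon_cgroup is reset to "pod", and net.ipv4.ip_unprivileged_port_start=0 is added to default_sysctls. A minimal Go sketch, not minikube code, of the cgroup_manager substitution; the other edits in the log follow the same pattern and it would be run inside the node container:

    // crio_cgroup.go - replicate the cgroup_manager sed on 02-crio.conf.
    package main

    import (
    	"fmt"
    	"os"
    	"regexp"
    )

    func main() {
    	const path = "/etc/crio/crio.conf.d/02-crio.conf"
    	data, err := os.ReadFile(path)
    	if err != nil {
    		fmt.Println("read:", err)
    		return
    	}
    	// Equivalent of: sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|'
    	re := regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`)
    	out := re.ReplaceAll(data, []byte(`cgroup_manager = "systemd"`))
    	if err := os.WriteFile(path, out, 0o644); err != nil {
    		fmt.Println("write:", err)
    	}
    }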
	I1009 20:06:07.285208  463914 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1009 20:06:07.285299  463914 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1009 20:06:07.289599  463914 start.go:564] Will wait 60s for crictl version
	I1009 20:06:07.289677  463914 ssh_runner.go:195] Run: which crictl
	I1009 20:06:07.293824  463914 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1009 20:06:07.318107  463914 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1009 20:06:07.318210  463914 ssh_runner.go:195] Run: crio --version
	I1009 20:06:07.346996  463914 ssh_runner.go:195] Run: crio --version
	I1009 20:06:07.378811  463914 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1009 20:06:07.381556  463914 cli_runner.go:164] Run: docker network inspect force-systemd-env-242564 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1009 20:06:07.397512  463914 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1009 20:06:07.401411  463914 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1009 20:06:07.412022  463914 kubeadm.go:883] updating cluster {Name:force-systemd-env-242564 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:force-systemd-env-242564 Namespace:default APIServerHAVIP: APIServerName:
minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSo
ck: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1009 20:06:07.412134  463914 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1009 20:06:07.412196  463914 ssh_runner.go:195] Run: sudo crictl images --output json
	I1009 20:06:07.445552  463914 crio.go:514] all images are preloaded for cri-o runtime.
	I1009 20:06:07.445579  463914 crio.go:433] Images already preloaded, skipping extraction
	I1009 20:06:07.445637  463914 ssh_runner.go:195] Run: sudo crictl images --output json
	I1009 20:06:07.471831  463914 crio.go:514] all images are preloaded for cri-o runtime.
	I1009 20:06:07.471854  463914 cache_images.go:85] Images are preloaded, skipping loading
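The two "all images are preloaded" checks above run `sudo crictl images --output json` and compare what the runtime reports against the image list expected for Kubernetes v1.34.1. A rough sketch of that comparison; the JSON field names are an assumption based on crictl's usual output shape, and the expected list here is trimmed to the pause image mentioned earlier in this log:

    // preload_check.go - list image tags known to CRI-O and flag missing ones.
    package main

    import (
    	"encoding/json"
    	"fmt"
    	"os/exec"
    )

    type crictlImages struct {
    	Images []struct {
    		RepoTags []string `json:"repoTags"`
    	} `json:"images"`
    }

    func main() {
    	out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
    	if err != nil {
    		fmt.Println("crictl:", err)
    		return
    	}
    	var list crictlImages
    	if err := json.Unmarshal(out, &list); err != nil {
    		fmt.Println("decode:", err)
    		return
    	}
    	have := map[string]bool{}
    	for _, img := range list.Images {
    		for _, tag := range img.RepoTags {
    			have[tag] = true
    		}
    	}
    	// Example expectation only; the real check covers the full preload tarball.
    	for _, want := range []string{"registry.k8s.io/pause:3.10.1"} {
    		if !have[want] {
    			fmt.Println("missing from preload:", want)
    		}
    	}
    	fmt.Println("images present:", len(have))
    }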
	I1009 20:06:07.471862  463914 kubeadm.go:934] updating node { 192.168.76.2 8443 v1.34.1 crio true true} ...
	I1009 20:06:07.471949  463914 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=force-systemd-env-242564 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:force-systemd-env-242564 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1009 20:06:07.472047  463914 ssh_runner.go:195] Run: crio config
	I1009 20:06:07.526282  463914 cni.go:84] Creating CNI manager for ""
	I1009 20:06:07.526313  463914 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1009 20:06:07.526328  463914 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1009 20:06:07.526352  463914 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:force-systemd-env-242564 NodeName:force-systemd-env-242564 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt Stati
cPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1009 20:06:07.526495  463914 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "force-systemd-env-242564"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
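The generated kubeadm.yaml above is copied to /var/tmp/minikube/kubeadm.yaml.new below and later promoted to /var/tmp/minikube/kubeadm.yaml. If an init failure like the one further down were suspected to come from the config itself, it could be checked offline on the node with the bundled kubeadm binary; a sketch, assuming the kubeadm in use supports config validate (present in recent releases), with --dry-run exercising the phases without changing the node:

	$ sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml
	$ sudo /var/lib/minikube/binaries/v1.34.1/kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run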
	
	I1009 20:06:07.526573  463914 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1009 20:06:07.534833  463914 binaries.go:44] Found k8s binaries, skipping transfer
	I1009 20:06:07.534928  463914 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1009 20:06:07.543008  463914 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (374 bytes)
	I1009 20:06:07.556545  463914 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1009 20:06:07.570233  463914 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2220 bytes)
	I1009 20:06:07.583846  463914 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1009 20:06:07.587703  463914 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
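The one-liner above is minikube's idempotent /etc/hosts update: drop any existing control-plane.minikube.internal entry, append the current mapping, and copy the temp file back over /etc/hosts. The same pipeline, expanded only for readability (semicolons replaced by newlines, nothing else changed):

	{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"
	  echo "192.168.76.2	control-plane.minikube.internal"
	} > /tmp/h.$$
	sudo cp /tmp/h.$$ "/etc/hosts"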
	I1009 20:06:07.598072  463914 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 20:06:07.716662  463914 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1009 20:06:07.734436  463914 certs.go:69] Setting up /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/force-systemd-env-242564 for IP: 192.168.76.2
	I1009 20:06:07.734457  463914 certs.go:195] generating shared ca certs ...
	I1009 20:06:07.734475  463914 certs.go:227] acquiring lock for ca certs: {Name:mk21fccf7de12baa51e226eec44546b42b579a70 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 20:06:07.734623  463914 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21683-294150/.minikube/ca.key
	I1009 20:06:07.734672  463914 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21683-294150/.minikube/proxy-client-ca.key
	I1009 20:06:07.734679  463914 certs.go:257] generating profile certs ...
	I1009 20:06:07.734738  463914 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/force-systemd-env-242564/client.key
	I1009 20:06:07.734759  463914 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/force-systemd-env-242564/client.crt with IP's: []
	I1009 20:06:08.024348  463914 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/force-systemd-env-242564/client.crt ...
	I1009 20:06:08.024383  463914 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/force-systemd-env-242564/client.crt: {Name:mkac7553ab0c16405ffc27546b65113c7f4ec0e9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 20:06:08.024607  463914 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/force-systemd-env-242564/client.key ...
	I1009 20:06:08.024624  463914 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/force-systemd-env-242564/client.key: {Name:mk1cc6c9da7266a73cd13f6fad0728d53ee5d5fb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 20:06:08.024731  463914 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/force-systemd-env-242564/apiserver.key.2ea16d24
	I1009 20:06:08.024760  463914 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/force-systemd-env-242564/apiserver.crt.2ea16d24 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1009 20:06:08.229356  463914 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/force-systemd-env-242564/apiserver.crt.2ea16d24 ...
	I1009 20:06:08.229382  463914 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/force-systemd-env-242564/apiserver.crt.2ea16d24: {Name:mk4dd28c4e24130f604f555d5ab54edf3b3b56a6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 20:06:08.229564  463914 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/force-systemd-env-242564/apiserver.key.2ea16d24 ...
	I1009 20:06:08.229575  463914 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/force-systemd-env-242564/apiserver.key.2ea16d24: {Name:mk64c7b57a8b25ea4f23f281fac9e031e57283d1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 20:06:08.229648  463914 certs.go:382] copying /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/force-systemd-env-242564/apiserver.crt.2ea16d24 -> /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/force-systemd-env-242564/apiserver.crt
	I1009 20:06:08.229726  463914 certs.go:386] copying /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/force-systemd-env-242564/apiserver.key.2ea16d24 -> /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/force-systemd-env-242564/apiserver.key
	I1009 20:06:08.229782  463914 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/force-systemd-env-242564/proxy-client.key
	I1009 20:06:08.229795  463914 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/force-systemd-env-242564/proxy-client.crt with IP's: []
	I1009 20:06:08.441569  463914 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/force-systemd-env-242564/proxy-client.crt ...
	I1009 20:06:08.441604  463914 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/force-systemd-env-242564/proxy-client.crt: {Name:mk1025018a7b705a66311ca022ea068a7e69f3f6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 20:06:08.441796  463914 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/force-systemd-env-242564/proxy-client.key ...
	I1009 20:06:08.441813  463914 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/force-systemd-env-242564/proxy-client.key: {Name:mkce3069b8670390eb918e89e064261c67730036 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 20:06:08.441914  463914 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-294150/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1009 20:06:08.441942  463914 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-294150/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1009 20:06:08.441959  463914 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-294150/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1009 20:06:08.441976  463914 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-294150/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1009 20:06:08.441989  463914 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/force-systemd-env-242564/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1009 20:06:08.442005  463914 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/force-systemd-env-242564/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1009 20:06:08.442016  463914 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/force-systemd-env-242564/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1009 20:06:08.442030  463914 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/force-systemd-env-242564/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1009 20:06:08.442083  463914 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-294150/.minikube/certs/296002.pem (1338 bytes)
	W1009 20:06:08.442134  463914 certs.go:480] ignoring /home/jenkins/minikube-integration/21683-294150/.minikube/certs/296002_empty.pem, impossibly tiny 0 bytes
	I1009 20:06:08.442150  463914 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-294150/.minikube/certs/ca-key.pem (1679 bytes)
	I1009 20:06:08.442176  463914 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-294150/.minikube/certs/ca.pem (1078 bytes)
	I1009 20:06:08.442205  463914 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-294150/.minikube/certs/cert.pem (1123 bytes)
	I1009 20:06:08.442231  463914 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-294150/.minikube/certs/key.pem (1679 bytes)
	I1009 20:06:08.442278  463914 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-294150/.minikube/files/etc/ssl/certs/2960022.pem (1708 bytes)
	I1009 20:06:08.442315  463914 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-294150/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1009 20:06:08.442338  463914 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-294150/.minikube/certs/296002.pem -> /usr/share/ca-certificates/296002.pem
	I1009 20:06:08.442349  463914 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-294150/.minikube/files/etc/ssl/certs/2960022.pem -> /usr/share/ca-certificates/2960022.pem
	I1009 20:06:08.442901  463914 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-294150/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1009 20:06:08.464467  463914 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-294150/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1009 20:06:08.488580  463914 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-294150/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1009 20:06:08.510777  463914 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-294150/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1009 20:06:08.532531  463914 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/force-systemd-env-242564/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I1009 20:06:08.552406  463914 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/force-systemd-env-242564/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1009 20:06:08.572012  463914 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/force-systemd-env-242564/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1009 20:06:08.591275  463914 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/force-systemd-env-242564/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1009 20:06:08.610873  463914 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-294150/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1009 20:06:08.629960  463914 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-294150/.minikube/certs/296002.pem --> /usr/share/ca-certificates/296002.pem (1338 bytes)
	I1009 20:06:08.649783  463914 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-294150/.minikube/files/etc/ssl/certs/2960022.pem --> /usr/share/ca-certificates/2960022.pem (1708 bytes)
	I1009 20:06:08.669010  463914 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
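At this point the profile certificates generated above (the client cert, the apiserver cert signed for 10.96.0.1 / 127.0.0.1 / 10.0.0.1 / 192.168.76.2, and the aggregator proxy-client pair) have all been copied under /var/lib/minikube/certs on the node. If an apiserver TLS problem were suspected, the SANs can be read straight back from that copy; a sketch:

	$ minikube -p force-systemd-env-242564 ssh -- sudo openssl x509 -in /var/lib/minikube/certs/apiserver.crt -noout -text | grep -A1 'Subject Alternative Name'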
	I1009 20:06:08.683224  463914 ssh_runner.go:195] Run: openssl version
	I1009 20:06:08.690103  463914 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1009 20:06:08.699857  463914 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1009 20:06:08.704320  463914 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  9 19:01 /usr/share/ca-certificates/minikubeCA.pem
	I1009 20:06:08.704434  463914 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1009 20:06:08.746184  463914 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1009 20:06:08.755252  463914 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/296002.pem && ln -fs /usr/share/ca-certificates/296002.pem /etc/ssl/certs/296002.pem"
	I1009 20:06:08.764547  463914 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/296002.pem
	I1009 20:06:08.768704  463914 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  9 19:08 /usr/share/ca-certificates/296002.pem
	I1009 20:06:08.768770  463914 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/296002.pem
	I1009 20:06:08.810565  463914 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/296002.pem /etc/ssl/certs/51391683.0"
	I1009 20:06:08.821431  463914 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2960022.pem && ln -fs /usr/share/ca-certificates/2960022.pem /etc/ssl/certs/2960022.pem"
	I1009 20:06:08.836543  463914 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2960022.pem
	I1009 20:06:08.843734  463914 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  9 19:08 /usr/share/ca-certificates/2960022.pem
	I1009 20:06:08.843857  463914 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2960022.pem
	I1009 20:06:08.887169  463914 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2960022.pem /etc/ssl/certs/3ec20f2e.0"
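The three openssl/ln pairs above follow the standard OpenSSL trust-directory convention: 'openssl x509 -hash -noout -in <cert>' prints the certificate's subject-name hash, and a symlink named <hash>.0 in /etc/ssl/certs (here b5213941.0 for minikubeCA.pem, 51391683.0 and 3ec20f2e.0 for the two test certs) is what lets TLS libraries find the CA by hash. To spot-check one of the links on the node:

	$ minikube -p force-systemd-env-242564 ssh -- openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	$ minikube -p force-systemd-env-242564 ssh -- ls -l /etc/ssl/certs/b5213941.0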
	I1009 20:06:08.896187  463914 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1009 20:06:08.900104  463914 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1009 20:06:08.900157  463914 kubeadm.go:400] StartCluster: {Name:force-systemd-env-242564 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:force-systemd-env-242564 Namespace:default APIServerHAVIP: APIServerName:min
ikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock:
SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1009 20:06:08.900244  463914 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1009 20:06:08.900313  463914 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1009 20:06:08.931899  463914 cri.go:89] found id: ""
	I1009 20:06:08.931994  463914 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1009 20:06:08.940554  463914 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1009 20:06:08.949025  463914 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1009 20:06:08.949191  463914 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1009 20:06:08.957982  463914 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1009 20:06:08.958004  463914 kubeadm.go:157] found existing configuration files:
	
	I1009 20:06:08.958062  463914 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1009 20:06:08.966732  463914 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1009 20:06:08.966895  463914 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1009 20:06:08.975126  463914 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1009 20:06:08.983527  463914 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1009 20:06:08.983598  463914 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1009 20:06:08.991881  463914 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1009 20:06:09.008567  463914 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1009 20:06:09.008717  463914 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1009 20:06:09.018416  463914 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1009 20:06:09.028196  463914 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1009 20:06:09.028318  463914 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1009 20:06:09.036787  463914 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1009 20:06:09.081803  463914 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1009 20:06:09.082052  463914 kubeadm.go:318] [preflight] Running pre-flight checks
	I1009 20:06:09.108448  463914 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1009 20:06:09.108663  463914 kubeadm.go:318] KERNEL_VERSION: 5.15.0-1084-aws
	I1009 20:06:09.108752  463914 kubeadm.go:318] OS: Linux
	I1009 20:06:09.108843  463914 kubeadm.go:318] CGROUPS_CPU: enabled
	I1009 20:06:09.108913  463914 kubeadm.go:318] CGROUPS_CPUACCT: enabled
	I1009 20:06:09.108969  463914 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1009 20:06:09.109024  463914 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1009 20:06:09.109079  463914 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1009 20:06:09.109160  463914 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1009 20:06:09.109214  463914 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1009 20:06:09.109265  463914 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1009 20:06:09.109318  463914 kubeadm.go:318] CGROUPS_BLKIO: enabled
	I1009 20:06:09.191238  463914 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1009 20:06:09.191445  463914 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1009 20:06:09.191598  463914 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1009 20:06:09.200453  463914 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1009 20:06:09.207533  463914 out.go:252]   - Generating certificates and keys ...
	I1009 20:06:09.207635  463914 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1009 20:06:09.207708  463914 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1009 20:06:10.135936  463914 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1009 20:06:10.778970  463914 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1009 20:06:10.932017  463914 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1009 20:06:11.387770  463914 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1009 20:06:12.023530  463914 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1009 20:06:12.023713  463914 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [force-systemd-env-242564 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1009 20:06:12.804124  463914 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1009 20:06:12.804365  463914 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [force-systemd-env-242564 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1009 20:06:13.304668  463914 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1009 20:06:13.505799  463914 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1009 20:06:13.952295  463914 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1009 20:06:13.952661  463914 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1009 20:06:14.498354  463914 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1009 20:06:15.390231  463914 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1009 20:06:15.651517  463914 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1009 20:06:16.659300  463914 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1009 20:06:16.719843  463914 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1009 20:06:16.721176  463914 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1009 20:06:16.732911  463914 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1009 20:06:16.736873  463914 out.go:252]   - Booting up control plane ...
	I1009 20:06:16.737014  463914 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1009 20:06:16.741667  463914 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1009 20:06:16.743302  463914 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1009 20:06:16.764014  463914 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1009 20:06:16.764130  463914 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1009 20:06:16.772206  463914 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1009 20:06:16.772572  463914 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1009 20:06:16.772821  463914 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1009 20:06:16.898589  463914 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1009 20:06:16.898723  463914 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1009 20:06:18.401472  463914 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.500880876s
	I1009 20:06:18.403070  463914 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1009 20:06:18.403185  463914 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.76.2:8443/livez
	I1009 20:06:18.403302  463914 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1009 20:06:18.403395  463914 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1009 20:08:11.916796  459237 kubeadm.go:318] [control-plane-check] kube-scheduler is not healthy after 4m0.00016937s
	I1009 20:08:11.917050  459237 kubeadm.go:318] [control-plane-check] kube-apiserver is not healthy after 4m0.000401044s
	I1009 20:08:11.917630  459237 kubeadm.go:318] [control-plane-check] kube-controller-manager is not healthy after 4m0.000599289s
	I1009 20:08:11.917649  459237 kubeadm.go:318] 
	I1009 20:08:11.917754  459237 kubeadm.go:318] A control plane component may have crashed or exited when started by the container runtime.
	I1009 20:08:11.917851  459237 kubeadm.go:318] To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1009 20:08:11.917950  459237 kubeadm.go:318] Here is one example how you may list all running Kubernetes containers by using crictl:
	I1009 20:08:11.918058  459237 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1009 20:08:11.918145  459237 kubeadm.go:318] 	Once you have found the failing container, you can inspect its logs with:
	I1009 20:08:11.918229  459237 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	I1009 20:08:11.918237  459237 kubeadm.go:318] 
	I1009 20:08:11.923392  459237 kubeadm.go:318] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1009 20:08:11.923630  459237 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1009 20:08:11.923747  459237 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1009 20:08:11.924330  459237 kubeadm.go:318] error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-apiserver check failed at https://192.168.85.2:8443/livez: client rate limiter Wait returned an error: context deadline exceeded, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	I1009 20:08:11.924425  459237 kubeadm.go:318] To see the stack trace of this error execute with --v=5 or higher
	W1009 20:08:11.924578  459237 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [force-systemd-flag-736218 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [force-systemd-flag-736218 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.001080756s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.85.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-scheduler is not healthy after 4m0.00016937s
	[control-plane-check] kube-apiserver is not healthy after 4m0.000401044s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000599289s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-apiserver check failed at https://192.168.85.2:8443/livez: client rate limiter Wait returned an error: context deadline exceeded, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
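The failed init above shows a consistent pattern: the kubelet comes up healthy, but none of the three control-plane components ever answers the endpoint kubeadm polls, so every check ends in a refused connection or a timeout. Those same endpoints can be probed by hand from inside the node to see whether anything is listening at all (the kubelet check is plain HTTP, the rest are self-signed HTTPS, hence -k); a sketch using the addresses from this run:

	$ curl -sf http://127.0.0.1:10248/healthz          # kubelet (the only check that passed)
	$ curl -sk https://192.168.85.2:8443/livez         # kube-apiserver
	$ curl -sk https://127.0.0.1:10257/healthz         # kube-controller-manager
	$ curl -sk https://127.0.0.1:10259/livez           # kube-scheduler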
	
	I1009 20:08:11.924660  459237 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1009 20:08:12.457628  459237 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1009 20:08:12.471972  459237 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1009 20:08:12.472037  459237 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1009 20:08:12.480269  459237 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1009 20:08:12.480292  459237 kubeadm.go:157] found existing configuration files:
	
	I1009 20:08:12.480344  459237 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1009 20:08:12.488611  459237 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1009 20:08:12.488677  459237 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1009 20:08:12.496453  459237 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1009 20:08:12.505181  459237 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1009 20:08:12.505303  459237 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1009 20:08:12.513042  459237 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1009 20:08:12.520935  459237 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1009 20:08:12.521004  459237 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1009 20:08:12.528868  459237 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1009 20:08:12.536615  459237 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1009 20:08:12.536745  459237 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1009 20:08:12.544163  459237 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1009 20:08:12.608800  459237 kubeadm.go:318] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1009 20:08:12.609040  459237 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1009 20:08:12.679432  459237 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1009 20:10:18.404163  463914 kubeadm.go:318] [control-plane-check] kube-apiserver is not healthy after 4m0.000912981s
	I1009 20:10:18.405430  463914 kubeadm.go:318] [control-plane-check] kube-scheduler is not healthy after 4m0.001005692s
	I1009 20:10:18.405535  463914 kubeadm.go:318] [control-plane-check] kube-controller-manager is not healthy after 4m0.000755345s
	I1009 20:10:18.405548  463914 kubeadm.go:318] 
	I1009 20:10:18.405666  463914 kubeadm.go:318] A control plane component may have crashed or exited when started by the container runtime.
	I1009 20:10:18.405785  463914 kubeadm.go:318] To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1009 20:10:18.405896  463914 kubeadm.go:318] Here is one example how you may list all running Kubernetes containers by using crictl:
	I1009 20:10:18.405997  463914 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1009 20:10:18.406076  463914 kubeadm.go:318] 	Once you have found the failing container, you can inspect its logs with:
	I1009 20:10:18.406159  463914 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	I1009 20:10:18.406164  463914 kubeadm.go:318] 
	I1009 20:10:18.410515  463914 kubeadm.go:318] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1009 20:10:18.410810  463914 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1009 20:10:18.410965  463914 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1009 20:10:18.411653  463914 kubeadm.go:318] error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.76.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": dial tcp 192.168.76.2:8443: connect: connection refused, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	I1009 20:10:18.411733  463914 kubeadm.go:318] To see the stack trace of this error execute with --v=5 or higher
	W1009 20:10:18.411897  463914 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [force-systemd-env-242564 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [force-systemd-env-242564 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.500880876s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.76.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-apiserver is not healthy after 4m0.000912981s
	[control-plane-check] kube-scheduler is not healthy after 4m0.001005692s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000755345s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.76.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": dial tcp 192.168.76.2:8443: connect: connection refused, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
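The troubleshooting hint kubeadm prints is the practical next step with CRI-O as the runtime: list the kube-* containers including exited ones, read the logs of whichever one crashed, and check the kubelet journal for why the static pods are not staying up. A sketch on the node, following the commands suggested above:

	$ sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause
	$ sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID
	$ sudo journalctl -u kubelet --no-pager | tail -n 50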
	
	I1009 20:10:18.411988  463914 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1009 20:10:18.963132  463914 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1009 20:10:18.977619  463914 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1009 20:10:18.977689  463914 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1009 20:10:18.986534  463914 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1009 20:10:18.986558  463914 kubeadm.go:157] found existing configuration files:
	
	I1009 20:10:18.986612  463914 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1009 20:10:18.994753  463914 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1009 20:10:18.994821  463914 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1009 20:10:19.004493  463914 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1009 20:10:19.013713  463914 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1009 20:10:19.013838  463914 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1009 20:10:19.021740  463914 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1009 20:10:19.030152  463914 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1009 20:10:19.030234  463914 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1009 20:10:19.038956  463914 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1009 20:10:19.046954  463914 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1009 20:10:19.047020  463914 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1009 20:10:19.054922  463914 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1009 20:10:19.096279  463914 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1009 20:10:19.096512  463914 kubeadm.go:318] [preflight] Running pre-flight checks
	I1009 20:10:19.120034  463914 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1009 20:10:19.120114  463914 kubeadm.go:318] KERNEL_VERSION: 5.15.0-1084-aws
	I1009 20:10:19.120157  463914 kubeadm.go:318] OS: Linux
	I1009 20:10:19.120210  463914 kubeadm.go:318] CGROUPS_CPU: enabled
	I1009 20:10:19.120265  463914 kubeadm.go:318] CGROUPS_CPUACCT: enabled
	I1009 20:10:19.120318  463914 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1009 20:10:19.120372  463914 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1009 20:10:19.120427  463914 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1009 20:10:19.120481  463914 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1009 20:10:19.120532  463914 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1009 20:10:19.120585  463914 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1009 20:10:19.120637  463914 kubeadm.go:318] CGROUPS_BLKIO: enabled
	I1009 20:10:19.190203  463914 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1009 20:10:19.190333  463914 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1009 20:10:19.190435  463914 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1009 20:10:19.201592  463914 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1009 20:10:19.209271  463914 out.go:252]   - Generating certificates and keys ...
	I1009 20:10:19.209384  463914 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1009 20:10:19.209465  463914 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1009 20:10:19.209563  463914 kubeadm.go:318] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1009 20:10:19.209650  463914 kubeadm.go:318] [certs] Using existing front-proxy-ca certificate authority
	I1009 20:10:19.209737  463914 kubeadm.go:318] [certs] Using existing front-proxy-client certificate and key on disk
	I1009 20:10:19.209808  463914 kubeadm.go:318] [certs] Using existing etcd/ca certificate authority
	I1009 20:10:19.209887  463914 kubeadm.go:318] [certs] Using existing etcd/server certificate and key on disk
	I1009 20:10:19.209964  463914 kubeadm.go:318] [certs] Using existing etcd/peer certificate and key on disk
	I1009 20:10:19.210054  463914 kubeadm.go:318] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1009 20:10:19.210144  463914 kubeadm.go:318] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1009 20:10:19.210201  463914 kubeadm.go:318] [certs] Using the existing "sa" key
	I1009 20:10:19.210277  463914 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1009 20:10:19.480077  463914 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1009 20:10:19.682912  463914 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1009 20:10:19.987845  463914 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1009 20:10:20.479142  463914 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1009 20:10:20.795749  463914 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1009 20:10:20.796414  463914 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1009 20:10:20.799040  463914 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1009 20:10:20.802581  463914 out.go:252]   - Booting up control plane ...
	I1009 20:10:20.802691  463914 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1009 20:10:20.802778  463914 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1009 20:10:20.802853  463914 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1009 20:10:20.818818  463914 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1009 20:10:20.818937  463914 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1009 20:10:20.826928  463914 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1009 20:10:20.827278  463914 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1009 20:10:20.827549  463914 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1009 20:10:20.976098  463914 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1009 20:10:20.976232  463914 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1009 20:10:22.475447  463914 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.501689024s
	I1009 20:10:22.479240  463914 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1009 20:10:22.479350  463914 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.76.2:8443/livez
	I1009 20:10:22.479453  463914 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1009 20:10:22.479544  463914 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1009 20:12:16.118091  459237 kubeadm.go:318] error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-apiserver check failed at https://192.168.85.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": dial tcp 192.168.85.2:8443: connect: connection refused]
	I1009 20:12:16.118199  459237 kubeadm.go:318] To see the stack trace of this error execute with --v=5 or higher
	I1009 20:12:16.121988  459237 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1009 20:12:16.122071  459237 kubeadm.go:318] [preflight] Running pre-flight checks
	I1009 20:12:16.122173  459237 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1009 20:12:16.122236  459237 kubeadm.go:318] KERNEL_VERSION: 5.15.0-1084-aws
	I1009 20:12:16.122279  459237 kubeadm.go:318] OS: Linux
	I1009 20:12:16.122330  459237 kubeadm.go:318] CGROUPS_CPU: enabled
	I1009 20:12:16.122387  459237 kubeadm.go:318] CGROUPS_CPUACCT: enabled
	I1009 20:12:16.122445  459237 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1009 20:12:16.122500  459237 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1009 20:12:16.122555  459237 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1009 20:12:16.122613  459237 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1009 20:12:16.122672  459237 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1009 20:12:16.122726  459237 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1009 20:12:16.122777  459237 kubeadm.go:318] CGROUPS_BLKIO: enabled
	I1009 20:12:16.122856  459237 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1009 20:12:16.122958  459237 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1009 20:12:16.123054  459237 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1009 20:12:16.123129  459237 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1009 20:12:16.126417  459237 out.go:252]   - Generating certificates and keys ...
	I1009 20:12:16.126534  459237 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1009 20:12:16.126631  459237 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1009 20:12:16.126763  459237 kubeadm.go:318] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1009 20:12:16.126829  459237 kubeadm.go:318] [certs] Using existing front-proxy-ca certificate authority
	I1009 20:12:16.126911  459237 kubeadm.go:318] [certs] Using existing front-proxy-client certificate and key on disk
	I1009 20:12:16.126986  459237 kubeadm.go:318] [certs] Using existing etcd/ca certificate authority
	I1009 20:12:16.127072  459237 kubeadm.go:318] [certs] Using existing etcd/server certificate and key on disk
	I1009 20:12:16.127155  459237 kubeadm.go:318] [certs] Using existing etcd/peer certificate and key on disk
	I1009 20:12:16.127239  459237 kubeadm.go:318] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1009 20:12:16.127317  459237 kubeadm.go:318] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1009 20:12:16.127361  459237 kubeadm.go:318] [certs] Using the existing "sa" key
	I1009 20:12:16.127422  459237 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1009 20:12:16.127477  459237 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1009 20:12:16.127538  459237 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1009 20:12:16.127596  459237 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1009 20:12:16.127661  459237 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1009 20:12:16.127715  459237 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1009 20:12:16.127802  459237 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1009 20:12:16.127871  459237 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1009 20:12:16.130895  459237 out.go:252]   - Booting up control plane ...
	I1009 20:12:16.131019  459237 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1009 20:12:16.131109  459237 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1009 20:12:16.131188  459237 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1009 20:12:16.131303  459237 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1009 20:12:16.131407  459237 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1009 20:12:16.131520  459237 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1009 20:12:16.131612  459237 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1009 20:12:16.131654  459237 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1009 20:12:16.131791  459237 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1009 20:12:16.131900  459237 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1009 20:12:16.131961  459237 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.008580101s
	I1009 20:12:16.132079  459237 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1009 20:12:16.132222  459237 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.85.2:8443/livez
	I1009 20:12:16.132319  459237 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1009 20:12:16.132402  459237 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1009 20:12:16.132478  459237 kubeadm.go:318] [control-plane-check] kube-scheduler is not healthy after 4m0.000392632s
	I1009 20:12:16.132564  459237 kubeadm.go:318] [control-plane-check] kube-controller-manager is not healthy after 4m0.000121591s
	I1009 20:12:16.132641  459237 kubeadm.go:318] [control-plane-check] kube-apiserver is not healthy after 4m0.000389745s
	I1009 20:12:16.132646  459237 kubeadm.go:318] 
	I1009 20:12:16.132741  459237 kubeadm.go:318] A control plane component may have crashed or exited when started by the container runtime.
	I1009 20:12:16.132827  459237 kubeadm.go:318] To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1009 20:12:16.132920  459237 kubeadm.go:318] Here is one example how you may list all running Kubernetes containers by using crictl:
	I1009 20:12:16.133018  459237 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1009 20:12:16.133096  459237 kubeadm.go:318] 	Once you have found the failing container, you can inspect its logs with:
	I1009 20:12:16.133378  459237 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	I1009 20:12:16.133422  459237 kubeadm.go:318] 
	I1009 20:12:16.133477  459237 kubeadm.go:402] duration metric: took 8m14.682125173s to StartCluster
	I1009 20:12:16.133531  459237 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 20:12:16.133607  459237 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 20:12:16.161554  459237 cri.go:89] found id: ""
	I1009 20:12:16.161588  459237 logs.go:282] 0 containers: []
	W1009 20:12:16.161597  459237 logs.go:284] No container was found matching "kube-apiserver"
	I1009 20:12:16.161605  459237 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 20:12:16.161671  459237 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 20:12:16.190207  459237 cri.go:89] found id: ""
	I1009 20:12:16.190238  459237 logs.go:282] 0 containers: []
	W1009 20:12:16.190247  459237 logs.go:284] No container was found matching "etcd"
	I1009 20:12:16.190253  459237 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 20:12:16.190317  459237 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 20:12:16.220534  459237 cri.go:89] found id: ""
	I1009 20:12:16.220558  459237 logs.go:282] 0 containers: []
	W1009 20:12:16.220567  459237 logs.go:284] No container was found matching "coredns"
	I1009 20:12:16.220574  459237 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 20:12:16.220635  459237 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 20:12:16.247774  459237 cri.go:89] found id: ""
	I1009 20:12:16.247799  459237 logs.go:282] 0 containers: []
	W1009 20:12:16.247808  459237 logs.go:284] No container was found matching "kube-scheduler"
	I1009 20:12:16.247815  459237 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 20:12:16.247877  459237 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 20:12:16.275360  459237 cri.go:89] found id: ""
	I1009 20:12:16.275385  459237 logs.go:282] 0 containers: []
	W1009 20:12:16.275394  459237 logs.go:284] No container was found matching "kube-proxy"
	I1009 20:12:16.275401  459237 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 20:12:16.275466  459237 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 20:12:16.308069  459237 cri.go:89] found id: ""
	I1009 20:12:16.308095  459237 logs.go:282] 0 containers: []
	W1009 20:12:16.308106  459237 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 20:12:16.308113  459237 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 20:12:16.308180  459237 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 20:12:16.335069  459237 cri.go:89] found id: ""
	I1009 20:12:16.335094  459237 logs.go:282] 0 containers: []
	W1009 20:12:16.335104  459237 logs.go:284] No container was found matching "kindnet"
	I1009 20:12:16.335114  459237 logs.go:123] Gathering logs for kubelet ...
	I1009 20:12:16.335152  459237 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 20:12:16.425930  459237 logs.go:123] Gathering logs for dmesg ...
	I1009 20:12:16.425966  459237 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 20:12:16.443924  459237 logs.go:123] Gathering logs for describe nodes ...
	I1009 20:12:16.443953  459237 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 20:12:16.520215  459237 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1009 20:12:16.511800    2364 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1009 20:12:16.512589    2364 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1009 20:12:16.514110    2364 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1009 20:12:16.514615    2364 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1009 20:12:16.516156    2364 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1009 20:12:16.511800    2364 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1009 20:12:16.512589    2364 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1009 20:12:16.514110    2364 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1009 20:12:16.514615    2364 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1009 20:12:16.516156    2364 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 20:12:16.520240  459237 logs.go:123] Gathering logs for CRI-O ...
	I1009 20:12:16.520254  459237 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 20:12:16.597236  459237 logs.go:123] Gathering logs for container status ...
	I1009 20:12:16.597285  459237 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1009 20:12:16.627047  459237 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.008580101s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.85.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-scheduler is not healthy after 4m0.000392632s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000121591s
	[control-plane-check] kube-apiserver is not healthy after 4m0.000389745s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-apiserver check failed at https://192.168.85.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": dial tcp 192.168.85.2:8443: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	W1009 20:12:16.627105  459237 out.go:285] * 
	W1009 20:12:16.627159  459237 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.008580101s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.85.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-scheduler is not healthy after 4m0.000392632s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000121591s
	[control-plane-check] kube-apiserver is not healthy after 4m0.000389745s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-apiserver check failed at https://192.168.85.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": dial tcp 192.168.85.2:8443: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	W1009 20:12:16.627170  459237 out.go:285] * 
	W1009 20:12:16.629371  459237 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1009 20:12:16.637390  459237 out.go:203] 
	W1009 20:12:16.640345  459237 out.go:285] X Exiting due to GUEST_START: failed to start node: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.008580101s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.85.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-scheduler is not healthy after 4m0.000392632s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000121591s
	[control-plane-check] kube-apiserver is not healthy after 4m0.000389745s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-apiserver check failed at https://192.168.85.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": dial tcp 192.168.85.2:8443: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	W1009 20:12:16.640373  459237 out.go:285] * 
	I1009 20:12:16.643585  459237 out.go:203] 
	
	
	==> CRI-O <==
	Oct 09 20:12:05 force-systemd-flag-736218 crio[837]: time="2025-10-09T20:12:05.133099587Z" level=info msg="createCtr: removing container 262f9c0c79473f63af04ef1075a1035b8f94d9a6ab60c6896a317298492290b6" id=75cd64e2-5468-4d9a-9a7b-eb16fed4c4b1 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 20:12:05 force-systemd-flag-736218 crio[837]: time="2025-10-09T20:12:05.133187515Z" level=info msg="createCtr: deleting container 262f9c0c79473f63af04ef1075a1035b8f94d9a6ab60c6896a317298492290b6 from storage" id=75cd64e2-5468-4d9a-9a7b-eb16fed4c4b1 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 20:12:05 force-systemd-flag-736218 crio[837]: time="2025-10-09T20:12:05.135968923Z" level=info msg="createCtr: releasing container name k8s_kube-scheduler_kube-scheduler-force-systemd-flag-736218_kube-system_c3b8fa75ac166b86411596aec6c85326_0" id=75cd64e2-5468-4d9a-9a7b-eb16fed4c4b1 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 20:12:07 force-systemd-flag-736218 crio[837]: time="2025-10-09T20:12:07.111215654Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.4-0" id=77c9487f-872c-415d-9cf9-5a312cc57898 name=/runtime.v1.ImageService/ImageStatus
	Oct 09 20:12:07 force-systemd-flag-736218 crio[837]: time="2025-10-09T20:12:07.112062455Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.4-0" id=0b1334f0-918d-49bf-9f55-4f3e98fb0170 name=/runtime.v1.ImageService/ImageStatus
	Oct 09 20:12:07 force-systemd-flag-736218 crio[837]: time="2025-10-09T20:12:07.11302568Z" level=info msg="Creating container: kube-system/etcd-force-systemd-flag-736218/etcd" id=c301ab91-93d5-404e-9071-2b0d052b71c9 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 20:12:07 force-systemd-flag-736218 crio[837]: time="2025-10-09T20:12:07.113387443Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 09 20:12:07 force-systemd-flag-736218 crio[837]: time="2025-10-09T20:12:07.117836188Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 09 20:12:07 force-systemd-flag-736218 crio[837]: time="2025-10-09T20:12:07.118345177Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 09 20:12:07 force-systemd-flag-736218 crio[837]: time="2025-10-09T20:12:07.129046925Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=c301ab91-93d5-404e-9071-2b0d052b71c9 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 20:12:07 force-systemd-flag-736218 crio[837]: time="2025-10-09T20:12:07.130204181Z" level=info msg="createCtr: deleting container ID 6560f2dcc6ad54ce26f393a0d1173cb502e5e547a3379210c31788d6e8fc98a8 from idIndex" id=c301ab91-93d5-404e-9071-2b0d052b71c9 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 20:12:07 force-systemd-flag-736218 crio[837]: time="2025-10-09T20:12:07.130280983Z" level=info msg="createCtr: removing container 6560f2dcc6ad54ce26f393a0d1173cb502e5e547a3379210c31788d6e8fc98a8" id=c301ab91-93d5-404e-9071-2b0d052b71c9 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 20:12:07 force-systemd-flag-736218 crio[837]: time="2025-10-09T20:12:07.130318546Z" level=info msg="createCtr: deleting container 6560f2dcc6ad54ce26f393a0d1173cb502e5e547a3379210c31788d6e8fc98a8 from storage" id=c301ab91-93d5-404e-9071-2b0d052b71c9 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 20:12:07 force-systemd-flag-736218 crio[837]: time="2025-10-09T20:12:07.133387238Z" level=info msg="createCtr: releasing container name k8s_etcd_etcd-force-systemd-flag-736218_kube-system_2f8eb6ce3e1258b4f710d89611befd14_0" id=c301ab91-93d5-404e-9071-2b0d052b71c9 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 20:12:10 force-systemd-flag-736218 crio[837]: time="2025-10-09T20:12:10.111411186Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.34.1" id=a894e7cb-5ff3-4dec-8376-00ebbb50a992 name=/runtime.v1.ImageService/ImageStatus
	Oct 09 20:12:10 force-systemd-flag-736218 crio[837]: time="2025-10-09T20:12:10.112428698Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.34.1" id=52f55287-7ed0-4e9a-a593-9285c389802e name=/runtime.v1.ImageService/ImageStatus
	Oct 09 20:12:10 force-systemd-flag-736218 crio[837]: time="2025-10-09T20:12:10.113623542Z" level=info msg="Creating container: kube-system/kube-apiserver-force-systemd-flag-736218/kube-apiserver" id=33be8789-0b3e-43d4-9c95-d97c35c4a5f5 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 20:12:10 force-systemd-flag-736218 crio[837]: time="2025-10-09T20:12:10.113901643Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 09 20:12:10 force-systemd-flag-736218 crio[837]: time="2025-10-09T20:12:10.118645917Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 09 20:12:10 force-systemd-flag-736218 crio[837]: time="2025-10-09T20:12:10.119328744Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 09 20:12:10 force-systemd-flag-736218 crio[837]: time="2025-10-09T20:12:10.130024666Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=33be8789-0b3e-43d4-9c95-d97c35c4a5f5 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 20:12:10 force-systemd-flag-736218 crio[837]: time="2025-10-09T20:12:10.131865856Z" level=info msg="createCtr: deleting container ID 831bc365f9c2c65afc770b3923c1806755c2523151e62020093ebd785b7c0df3 from idIndex" id=33be8789-0b3e-43d4-9c95-d97c35c4a5f5 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 20:12:10 force-systemd-flag-736218 crio[837]: time="2025-10-09T20:12:10.13191316Z" level=info msg="createCtr: removing container 831bc365f9c2c65afc770b3923c1806755c2523151e62020093ebd785b7c0df3" id=33be8789-0b3e-43d4-9c95-d97c35c4a5f5 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 20:12:10 force-systemd-flag-736218 crio[837]: time="2025-10-09T20:12:10.131957657Z" level=info msg="createCtr: deleting container 831bc365f9c2c65afc770b3923c1806755c2523151e62020093ebd785b7c0df3 from storage" id=33be8789-0b3e-43d4-9c95-d97c35c4a5f5 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 20:12:10 force-systemd-flag-736218 crio[837]: time="2025-10-09T20:12:10.136125412Z" level=info msg="createCtr: releasing container name k8s_kube-apiserver_kube-apiserver-force-systemd-flag-736218_kube-system_b4497812c5e82bcab31d09fe9a5b3659_0" id=33be8789-0b3e-43d4-9c95-d97c35c4a5f5 name=/runtime.v1.RuntimeService/CreateContainer
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1009 20:12:17.995736    2484 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1009 20:12:17.996357    2484 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1009 20:12:17.998065    2484 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1009 20:12:17.998685    2484 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1009 20:12:18.004538    2484 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Oct 9 19:36] overlayfs: idmapped layers are currently not supported
	[  +4.492991] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:37] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:38] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:40] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:45] overlayfs: idmapped layers are currently not supported
	[ +36.012100] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:47] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:48] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:49] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:50] overlayfs: idmapped layers are currently not supported
	[ +27.967875] overlayfs: idmapped layers are currently not supported
	[  +2.167003] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:51] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:52] overlayfs: idmapped layers are currently not supported
	[ +41.056229] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:53] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:54] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:55] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:57] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:59] overlayfs: idmapped layers are currently not supported
	[ +30.257956] overlayfs: idmapped layers are currently not supported
	[Oct 9 20:02] overlayfs: idmapped layers are currently not supported
	[Oct 9 20:04] overlayfs: idmapped layers are currently not supported
	[Oct 9 20:06] overlayfs: idmapped layers are currently not supported
	
	
	==> kernel <==
	 20:12:18 up  2:54,  0 user,  load average: 0.44, 0.82, 1.45
	Linux force-systemd-flag-736218 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Oct 09 20:12:05 force-systemd-flag-736218 kubelet[1786]: E1009 20:12:05.904130    1786 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://192.168.85.2:8443/api/v1/nodes\": dial tcp 192.168.85.2:8443: connect: connection refused" node="force-systemd-flag-736218"
	Oct 09 20:12:06 force-systemd-flag-736218 kubelet[1786]: E1009 20:12:06.148853    1786 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"force-systemd-flag-736218\" not found"
	Oct 09 20:12:07 force-systemd-flag-736218 kubelet[1786]: E1009 20:12:07.110785    1786 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"force-systemd-flag-736218\" not found" node="force-systemd-flag-736218"
	Oct 09 20:12:07 force-systemd-flag-736218 kubelet[1786]: E1009 20:12:07.133672    1786 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 09 20:12:07 force-systemd-flag-736218 kubelet[1786]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 09 20:12:07 force-systemd-flag-736218 kubelet[1786]:  > podSandboxID="aabe1cbc7395e9af73add9906cd4834bc9f0033d327a6d6c9b208111c51df977"
	Oct 09 20:12:07 force-systemd-flag-736218 kubelet[1786]: E1009 20:12:07.133809    1786 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 09 20:12:07 force-systemd-flag-736218 kubelet[1786]:         container etcd start failed in pod etcd-force-systemd-flag-736218_kube-system(2f8eb6ce3e1258b4f710d89611befd14): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 09 20:12:07 force-systemd-flag-736218 kubelet[1786]:  > logger="UnhandledError"
	Oct 09 20:12:07 force-systemd-flag-736218 kubelet[1786]: E1009 20:12:07.133843    1786 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"etcd\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/etcd-force-systemd-flag-736218" podUID="2f8eb6ce3e1258b4f710d89611befd14"
	Oct 09 20:12:09 force-systemd-flag-736218 kubelet[1786]: E1009 20:12:09.141951    1786 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://192.168.85.2:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 192.168.85.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
	Oct 09 20:12:10 force-systemd-flag-736218 kubelet[1786]: E1009 20:12:10.110906    1786 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"force-systemd-flag-736218\" not found" node="force-systemd-flag-736218"
	Oct 09 20:12:10 force-systemd-flag-736218 kubelet[1786]: E1009 20:12:10.136503    1786 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 09 20:12:10 force-systemd-flag-736218 kubelet[1786]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 09 20:12:10 force-systemd-flag-736218 kubelet[1786]:  > podSandboxID="4ef36a8e661063c44899a310db98928bf379d9e24cce1fe8108888979e181adc"
	Oct 09 20:12:10 force-systemd-flag-736218 kubelet[1786]: E1009 20:12:10.136627    1786 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 09 20:12:10 force-systemd-flag-736218 kubelet[1786]:         container kube-apiserver start failed in pod kube-apiserver-force-systemd-flag-736218_kube-system(b4497812c5e82bcab31d09fe9a5b3659): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 09 20:12:10 force-systemd-flag-736218 kubelet[1786]:  > logger="UnhandledError"
	Oct 09 20:12:10 force-systemd-flag-736218 kubelet[1786]: E1009 20:12:10.136662    1786 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-apiserver-force-systemd-flag-736218" podUID="b4497812c5e82bcab31d09fe9a5b3659"
	Oct 09 20:12:10 force-systemd-flag-736218 kubelet[1786]: E1009 20:12:10.346325    1786 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://192.168.85.2:8443/api/v1/nodes?fieldSelector=metadata.name%3Dforce-systemd-flag-736218&limit=500&resourceVersion=0\": dial tcp 192.168.85.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	Oct 09 20:12:12 force-systemd-flag-736218 kubelet[1786]: E1009 20:12:12.708263    1786 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.85.2:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/force-systemd-flag-736218?timeout=10s\": dial tcp 192.168.85.2:8443: connect: connection refused" interval="7s"
	Oct 09 20:12:12 force-systemd-flag-736218 kubelet[1786]: I1009 20:12:12.905652    1786 kubelet_node_status.go:75] "Attempting to register node" node="force-systemd-flag-736218"
	Oct 09 20:12:12 force-systemd-flag-736218 kubelet[1786]: E1009 20:12:12.906017    1786 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://192.168.85.2:8443/api/v1/nodes\": dial tcp 192.168.85.2:8443: connect: connection refused" node="force-systemd-flag-736218"
	Oct 09 20:12:14 force-systemd-flag-736218 kubelet[1786]: E1009 20:12:14.494737    1786 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://192.168.85.2:8443/api/v1/namespaces/default/events\": dial tcp 192.168.85.2:8443: connect: connection refused" event="&Event{ObjectMeta:{force-systemd-flag-736218.186ceb8190fffc8b  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:force-systemd-flag-736218,UID:force-systemd-flag-736218,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node force-systemd-flag-736218 status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:force-systemd-flag-736218,},FirstTimestamp:2025-10-09 20:08:16.111025291 +0000 UTC m=+1.009332698,LastTimestamp:2025-10-09 20:08:16.111025291 +0000 UTC m=+1.009332698,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:force-systemd-flag-736218,}"
	Oct 09 20:12:16 force-systemd-flag-736218 kubelet[1786]: E1009 20:12:16.149743    1786 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"force-systemd-flag-736218\" not found"
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p force-systemd-flag-736218 -n force-systemd-flag-736218
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p force-systemd-flag-736218 -n force-systemd-flag-736218: exit status 6 (337.781706ms)

                                                
                                                
-- stdout --
	Stopped
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1009 20:12:18.490061  467877 status.go:458] kubeconfig endpoint: get endpoint: "force-systemd-flag-736218" does not appear in /home/jenkins/minikube-integration/21683-294150/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:262: status error: exit status 6 (may be ok)
helpers_test.go:264: "force-systemd-flag-736218" apiserver is not running, skipping kubectl commands (state="Stopped")
helpers_test.go:175: Cleaning up "force-systemd-flag-736218" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-flag-736218
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-flag-736218: (1.956503219s)
--- FAIL: TestForceSystemdFlag (515.98s)
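Note: the kubelet log above shows kube-apiserver repeatedly failing with CreateContainerError: "cannot open sd-bus: No such file or directory", i.e. CRI-O's systemd cgroup manager could not reach the systemd bus inside the node container, so no control-plane container ever started and the apiserver stayed unreachable. A minimal inspection sketch follows, assuming the force-systemd-flag-736218 node container still exists (the profile is deleted during cleanup above, so this only applies before cleanup or after re-running the test); the config path and the crio unit name are taken from the logs in this report, the specific checks are only illustrative and not part of the test:

	# Is the node container up, and is systemd PID 1 inside it?
	docker ps --filter name=force-systemd-flag-736218 --format '{{.Names}}: {{.Status}}'
	docker exec -u root force-systemd-flag-736218 cat /proc/1/comm
	# Are the systemd / D-Bus sockets present (the "cannot open sd-bus" error suggests they are not reachable)?
	docker exec -u root force-systemd-flag-736218 ls -l /run/systemd/private /run/dbus/system_bus_socket
	# Confirm the cgroup manager CRI-O was configured with, and check its recent log for the same error.
	docker exec -u root force-systemd-flag-736218 grep -E 'cgroup_manager|conmon_cgroup' /etc/crio/crio.conf.d/02-crio.conf
	docker exec -u root force-systemd-flag-736218 journalctl -u crio --no-pager | tail -n 20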

                                                
                                    
TestForceSystemdEnv (511.84s)

                                                
                                                
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-env-242564 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
E1009 20:06:21.976087  296002 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/functional-326957/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1009 20:09:14.730755  296002 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/addons-999657/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1009 20:11:21.979162  296002 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/functional-326957/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
docker_test.go:155: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p force-systemd-env-242564 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: exit status 80 (8m28.323111505s)

                                                
                                                
-- stdout --
	* [force-systemd-env-242564] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21683
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21683-294150/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21683-294150/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=true
	* Using the docker driver based on user configuration
	* Using Docker driver with root privileges
	* Starting "force-systemd-env-242564" primary control-plane node in "force-systemd-env-242564" cluster
	* Pulling base image v0.0.48-1759745255-21703 ...
	* Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1009 20:05:54.747252  463914 out.go:360] Setting OutFile to fd 1 ...
	I1009 20:05:54.747384  463914 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1009 20:05:54.747396  463914 out.go:374] Setting ErrFile to fd 2...
	I1009 20:05:54.747401  463914 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1009 20:05:54.747742  463914 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21683-294150/.minikube/bin
	I1009 20:05:54.748216  463914 out.go:368] Setting JSON to false
	I1009 20:05:54.749240  463914 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":10094,"bootTime":1760030261,"procs":173,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1009 20:05:54.749322  463914 start.go:143] virtualization:  
	I1009 20:05:54.752846  463914 out.go:179] * [force-systemd-env-242564] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1009 20:05:54.756558  463914 notify.go:221] Checking for updates...
	I1009 20:05:54.759574  463914 out.go:179]   - MINIKUBE_LOCATION=21683
	I1009 20:05:54.762912  463914 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1009 20:05:54.765790  463914 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21683-294150/kubeconfig
	I1009 20:05:54.768676  463914 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21683-294150/.minikube
	I1009 20:05:54.771565  463914 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1009 20:05:54.774390  463914 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=true
	I1009 20:05:54.777915  463914 config.go:182] Loaded profile config "force-systemd-flag-736218": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1009 20:05:54.778088  463914 driver.go:422] Setting default libvirt URI to qemu:///system
	I1009 20:05:54.812970  463914 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1009 20:05:54.813206  463914 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1009 20:05:54.868152  463914 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-09 20:05:54.858875083 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214827008 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1009 20:05:54.868265  463914 docker.go:319] overlay module found
	I1009 20:05:54.871424  463914 out.go:179] * Using the docker driver based on user configuration
	I1009 20:05:54.874318  463914 start.go:309] selected driver: docker
	I1009 20:05:54.874336  463914 start.go:930] validating driver "docker" against <nil>
	I1009 20:05:54.874350  463914 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1009 20:05:54.875131  463914 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1009 20:05:54.934141  463914 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-09 20:05:54.924820306 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214827008 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1009 20:05:54.934329  463914 start_flags.go:328] no existing cluster config was found, will generate one from the flags 
	I1009 20:05:54.934551  463914 start_flags.go:975] Wait components to verify : map[apiserver:true system_pods:true]
	I1009 20:05:54.937871  463914 out.go:179] * Using Docker driver with root privileges
	I1009 20:05:54.940788  463914 cni.go:84] Creating CNI manager for ""
	I1009 20:05:54.940876  463914 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1009 20:05:54.940897  463914 start_flags.go:337] Found "CNI" CNI - setting NetworkPlugin=cni
	I1009 20:05:54.940990  463914 start.go:353] cluster config:
	{Name:force-systemd-env-242564 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:force-systemd-env-242564 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.
local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1009 20:05:54.945214  463914 out.go:179] * Starting "force-systemd-env-242564" primary control-plane node in "force-systemd-env-242564" cluster
	I1009 20:05:54.948008  463914 cache.go:123] Beginning downloading kic base image for docker with crio
	I1009 20:05:54.950968  463914 out.go:179] * Pulling base image v0.0.48-1759745255-21703 ...
	I1009 20:05:54.953733  463914 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1009 20:05:54.953802  463914 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21683-294150/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1009 20:05:54.953813  463914 cache.go:58] Caching tarball of preloaded images
	I1009 20:05:54.953846  463914 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon
	I1009 20:05:54.953927  463914 preload.go:233] Found /home/jenkins/minikube-integration/21683-294150/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1009 20:05:54.953938  463914 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1009 20:05:54.954044  463914 profile.go:143] Saving config to /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/force-systemd-env-242564/config.json ...
	I1009 20:05:54.954061  463914 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/force-systemd-env-242564/config.json: {Name:mkc0bcf42f8203a30a8b1921d806208cae48a73c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 20:05:54.974075  463914 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon, skipping pull
	I1009 20:05:54.974104  463914 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 exists in daemon, skipping load
	I1009 20:05:54.974118  463914 cache.go:232] Successfully downloaded all kic artifacts
	I1009 20:05:54.974141  463914 start.go:361] acquireMachinesLock for force-systemd-env-242564: {Name:mk389361bb03203729416af71489bf16c0efad4c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1009 20:05:54.974254  463914 start.go:365] duration metric: took 92.472µs to acquireMachinesLock for "force-systemd-env-242564"
	I1009 20:05:54.974285  463914 start.go:94] Provisioning new machine with config: &{Name:force-systemd-env-242564 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:force-systemd-env-242564 Namespace:default APIServerHA
VIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SS
HAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1009 20:05:54.974361  463914 start.go:126] createHost starting for "" (driver="docker")
	I1009 20:05:54.977787  463914 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1009 20:05:54.978044  463914 start.go:160] libmachine.API.Create for "force-systemd-env-242564" (driver="docker")
	I1009 20:05:54.978093  463914 client.go:168] LocalClient.Create starting
	I1009 20:05:54.978197  463914 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21683-294150/.minikube/certs/ca.pem
	I1009 20:05:54.978237  463914 main.go:141] libmachine: Decoding PEM data...
	I1009 20:05:54.978260  463914 main.go:141] libmachine: Parsing certificate...
	I1009 20:05:54.978317  463914 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21683-294150/.minikube/certs/cert.pem
	I1009 20:05:54.978339  463914 main.go:141] libmachine: Decoding PEM data...
	I1009 20:05:54.978357  463914 main.go:141] libmachine: Parsing certificate...
	I1009 20:05:54.978742  463914 cli_runner.go:164] Run: docker network inspect force-systemd-env-242564 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1009 20:05:54.994762  463914 cli_runner.go:211] docker network inspect force-systemd-env-242564 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1009 20:05:54.994843  463914 network_create.go:284] running [docker network inspect force-systemd-env-242564] to gather additional debugging logs...
	I1009 20:05:54.994885  463914 cli_runner.go:164] Run: docker network inspect force-systemd-env-242564
	W1009 20:05:55.033565  463914 cli_runner.go:211] docker network inspect force-systemd-env-242564 returned with exit code 1
	I1009 20:05:55.033605  463914 network_create.go:287] error running [docker network inspect force-systemd-env-242564]: docker network inspect force-systemd-env-242564: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network force-systemd-env-242564 not found
	I1009 20:05:55.033621  463914 network_create.go:289] output of [docker network inspect force-systemd-env-242564]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network force-systemd-env-242564 not found
	
	** /stderr **
	I1009 20:05:55.034228  463914 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1009 20:05:55.061682  463914 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-3847a6577684 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:6e:b5:e6:7d:c7:ad} reservation:<nil>}
	I1009 20:05:55.062090  463914 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-5742e12e0dad IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:9e:82:91:fd:a6:fb} reservation:<nil>}
	I1009 20:05:55.062348  463914 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-11b099636187 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:0e:bb:e5:1b:6d:a2} reservation:<nil>}
	I1009 20:05:55.062803  463914 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40019cdf20}
	I1009 20:05:55.062828  463914 network_create.go:124] attempt to create docker network force-systemd-env-242564 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1009 20:05:55.062898  463914 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=force-systemd-env-242564 force-systemd-env-242564
	I1009 20:05:55.134320  463914 network_create.go:108] docker network force-systemd-env-242564 192.168.76.0/24 created
	I1009 20:05:55.134352  463914 kic.go:121] calculated static IP "192.168.76.2" for the "force-systemd-env-242564" container
	I1009 20:05:55.134430  463914 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1009 20:05:55.152257  463914 cli_runner.go:164] Run: docker volume create force-systemd-env-242564 --label name.minikube.sigs.k8s.io=force-systemd-env-242564 --label created_by.minikube.sigs.k8s.io=true
	I1009 20:05:55.172281  463914 oci.go:103] Successfully created a docker volume force-systemd-env-242564
	I1009 20:05:55.172390  463914 cli_runner.go:164] Run: docker run --rm --name force-systemd-env-242564-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=force-systemd-env-242564 --entrypoint /usr/bin/test -v force-systemd-env-242564:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 -d /var/lib
	I1009 20:05:55.695090  463914 oci.go:107] Successfully prepared a docker volume force-systemd-env-242564
	I1009 20:05:55.695138  463914 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1009 20:05:55.695157  463914 kic.go:194] Starting extracting preloaded images to volume ...
	I1009 20:05:55.695237  463914 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21683-294150/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v force-systemd-env-242564:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 -I lz4 -xf /preloaded.tar -C /extractDir
	I1009 20:06:00.262527  463914 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21683-294150/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v force-systemd-env-242564:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 -I lz4 -xf /preloaded.tar -C /extractDir: (4.567223793s)
	I1009 20:06:00.262563  463914 kic.go:203] duration metric: took 4.567401468s to extract preloaded images to volume ...
	W1009 20:06:00.262771  463914 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1009 20:06:00.262898  463914 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1009 20:06:00.482583  463914 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname force-systemd-env-242564 --name force-systemd-env-242564 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=force-systemd-env-242564 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=force-systemd-env-242564 --network force-systemd-env-242564 --ip 192.168.76.2 --volume force-systemd-env-242564:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92
	I1009 20:06:00.872122  463914 cli_runner.go:164] Run: docker container inspect force-systemd-env-242564 --format={{.State.Running}}
	I1009 20:06:00.895257  463914 cli_runner.go:164] Run: docker container inspect force-systemd-env-242564 --format={{.State.Status}}
	I1009 20:06:00.921286  463914 cli_runner.go:164] Run: docker exec force-systemd-env-242564 stat /var/lib/dpkg/alternatives/iptables
	I1009 20:06:00.974472  463914 oci.go:144] the created container "force-systemd-env-242564" has a running status.
	I1009 20:06:00.974525  463914 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21683-294150/.minikube/machines/force-systemd-env-242564/id_rsa...
	I1009 20:06:01.076845  463914 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-294150/.minikube/machines/force-systemd-env-242564/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I1009 20:06:01.076945  463914 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21683-294150/.minikube/machines/force-systemd-env-242564/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1009 20:06:01.098657  463914 cli_runner.go:164] Run: docker container inspect force-systemd-env-242564 --format={{.State.Status}}
	I1009 20:06:01.120922  463914 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1009 20:06:01.120943  463914 kic_runner.go:114] Args: [docker exec --privileged force-systemd-env-242564 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1009 20:06:01.178939  463914 cli_runner.go:164] Run: docker container inspect force-systemd-env-242564 --format={{.State.Status}}
	I1009 20:06:01.201388  463914 machine.go:93] provisionDockerMachine start ...
	I1009 20:06:01.201502  463914 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-242564
	I1009 20:06:01.230004  463914 main.go:141] libmachine: Using SSH client type: native
	I1009 20:06:01.230347  463914 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33401 <nil> <nil>}
	I1009 20:06:01.230356  463914 main.go:141] libmachine: About to run SSH command:
	hostname
	I1009 20:06:01.231155  463914 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1009 20:06:04.381410  463914 main.go:141] libmachine: SSH cmd err, output: <nil>: force-systemd-env-242564
	
	I1009 20:06:04.381520  463914 ubuntu.go:182] provisioning hostname "force-systemd-env-242564"
	I1009 20:06:04.381595  463914 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-242564
	I1009 20:06:04.401787  463914 main.go:141] libmachine: Using SSH client type: native
	I1009 20:06:04.402093  463914 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33401 <nil> <nil>}
	I1009 20:06:04.402110  463914 main.go:141] libmachine: About to run SSH command:
	sudo hostname force-systemd-env-242564 && echo "force-systemd-env-242564" | sudo tee /etc/hostname
	I1009 20:06:04.559896  463914 main.go:141] libmachine: SSH cmd err, output: <nil>: force-systemd-env-242564
	
	I1009 20:06:04.560020  463914 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-242564
	I1009 20:06:04.579898  463914 main.go:141] libmachine: Using SSH client type: native
	I1009 20:06:04.580228  463914 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33401 <nil> <nil>}
	I1009 20:06:04.580250  463914 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sforce-systemd-env-242564' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 force-systemd-env-242564/g' /etc/hosts;
				else 
					echo '127.0.1.1 force-systemd-env-242564' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1009 20:06:04.730002  463914 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1009 20:06:04.730030  463914 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21683-294150/.minikube CaCertPath:/home/jenkins/minikube-integration/21683-294150/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21683-294150/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21683-294150/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21683-294150/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21683-294150/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21683-294150/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21683-294150/.minikube}
	I1009 20:06:04.730051  463914 ubuntu.go:190] setting up certificates
	I1009 20:06:04.730062  463914 provision.go:84] configureAuth start
	I1009 20:06:04.730147  463914 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" force-systemd-env-242564
	I1009 20:06:04.748978  463914 provision.go:143] copyHostCerts
	I1009 20:06:04.749022  463914 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-294150/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21683-294150/.minikube/ca.pem
	I1009 20:06:04.749056  463914 exec_runner.go:144] found /home/jenkins/minikube-integration/21683-294150/.minikube/ca.pem, removing ...
	I1009 20:06:04.749069  463914 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21683-294150/.minikube/ca.pem
	I1009 20:06:04.749308  463914 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-294150/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21683-294150/.minikube/ca.pem (1078 bytes)
	I1009 20:06:04.749402  463914 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-294150/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21683-294150/.minikube/cert.pem
	I1009 20:06:04.749428  463914 exec_runner.go:144] found /home/jenkins/minikube-integration/21683-294150/.minikube/cert.pem, removing ...
	I1009 20:06:04.749434  463914 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21683-294150/.minikube/cert.pem
	I1009 20:06:04.749468  463914 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-294150/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21683-294150/.minikube/cert.pem (1123 bytes)
	I1009 20:06:04.749523  463914 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-294150/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21683-294150/.minikube/key.pem
	I1009 20:06:04.749550  463914 exec_runner.go:144] found /home/jenkins/minikube-integration/21683-294150/.minikube/key.pem, removing ...
	I1009 20:06:04.749560  463914 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21683-294150/.minikube/key.pem
	I1009 20:06:04.749587  463914 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-294150/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21683-294150/.minikube/key.pem (1679 bytes)
	I1009 20:06:04.749648  463914 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21683-294150/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21683-294150/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21683-294150/.minikube/certs/ca-key.pem org=jenkins.force-systemd-env-242564 san=[127.0.0.1 192.168.76.2 force-systemd-env-242564 localhost minikube]
	I1009 20:06:05.190068  463914 provision.go:177] copyRemoteCerts
	I1009 20:06:05.190147  463914 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1009 20:06:05.190193  463914 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-242564
	I1009 20:06:05.209080  463914 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33401 SSHKeyPath:/home/jenkins/minikube-integration/21683-294150/.minikube/machines/force-systemd-env-242564/id_rsa Username:docker}
	I1009 20:06:05.317618  463914 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-294150/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1009 20:06:05.317694  463914 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-294150/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1009 20:06:05.336728  463914 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-294150/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1009 20:06:05.336808  463914 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-294150/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I1009 20:06:05.356734  463914 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-294150/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1009 20:06:05.356812  463914 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-294150/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1009 20:06:05.376260  463914 provision.go:87] duration metric: took 646.171998ms to configureAuth
	I1009 20:06:05.376337  463914 ubuntu.go:206] setting minikube options for container-runtime
	I1009 20:06:05.376562  463914 config.go:182] Loaded profile config "force-systemd-env-242564": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1009 20:06:05.376693  463914 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-242564
	I1009 20:06:05.400562  463914 main.go:141] libmachine: Using SSH client type: native
	I1009 20:06:05.400916  463914 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33401 <nil> <nil>}
	I1009 20:06:05.400940  463914 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1009 20:06:05.674596  463914 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1009 20:06:05.674625  463914 machine.go:96] duration metric: took 4.473216751s to provisionDockerMachine
	I1009 20:06:05.674644  463914 client.go:171] duration metric: took 10.696531805s to LocalClient.Create
	I1009 20:06:05.674669  463914 start.go:168] duration metric: took 10.696626542s to libmachine.API.Create "force-systemd-env-242564"
	I1009 20:06:05.674682  463914 start.go:294] postStartSetup for "force-systemd-env-242564" (driver="docker")
	I1009 20:06:05.674695  463914 start.go:323] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1009 20:06:05.674779  463914 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1009 20:06:05.674857  463914 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-242564
	I1009 20:06:05.693346  463914 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33401 SSHKeyPath:/home/jenkins/minikube-integration/21683-294150/.minikube/machines/force-systemd-env-242564/id_rsa Username:docker}
	I1009 20:06:05.797946  463914 ssh_runner.go:195] Run: cat /etc/os-release
	I1009 20:06:05.801790  463914 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1009 20:06:05.801822  463914 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1009 20:06:05.801835  463914 filesync.go:126] Scanning /home/jenkins/minikube-integration/21683-294150/.minikube/addons for local assets ...
	I1009 20:06:05.801895  463914 filesync.go:126] Scanning /home/jenkins/minikube-integration/21683-294150/.minikube/files for local assets ...
	I1009 20:06:05.802001  463914 filesync.go:149] local asset: /home/jenkins/minikube-integration/21683-294150/.minikube/files/etc/ssl/certs/2960022.pem -> 2960022.pem in /etc/ssl/certs
	I1009 20:06:05.802014  463914 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-294150/.minikube/files/etc/ssl/certs/2960022.pem -> /etc/ssl/certs/2960022.pem
	I1009 20:06:05.802123  463914 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1009 20:06:05.810702  463914 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-294150/.minikube/files/etc/ssl/certs/2960022.pem --> /etc/ssl/certs/2960022.pem (1708 bytes)
	I1009 20:06:05.831074  463914 start.go:297] duration metric: took 156.376213ms for postStartSetup
	I1009 20:06:05.831515  463914 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" force-systemd-env-242564
	I1009 20:06:05.849403  463914 profile.go:143] Saving config to /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/force-systemd-env-242564/config.json ...
	I1009 20:06:05.849709  463914 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1009 20:06:05.849762  463914 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-242564
	I1009 20:06:05.867718  463914 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33401 SSHKeyPath:/home/jenkins/minikube-integration/21683-294150/.minikube/machines/force-systemd-env-242564/id_rsa Username:docker}
	I1009 20:06:05.970503  463914 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1009 20:06:05.975635  463914 start.go:129] duration metric: took 11.001255597s to createHost
	I1009 20:06:05.975669  463914 start.go:84] releasing machines lock for "force-systemd-env-242564", held for 11.001395528s
	I1009 20:06:05.975748  463914 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" force-systemd-env-242564
	I1009 20:06:05.993146  463914 ssh_runner.go:195] Run: cat /version.json
	I1009 20:06:05.993160  463914 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1009 20:06:05.993201  463914 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-242564
	I1009 20:06:05.993201  463914 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-242564
	I1009 20:06:06.014732  463914 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33401 SSHKeyPath:/home/jenkins/minikube-integration/21683-294150/.minikube/machines/force-systemd-env-242564/id_rsa Username:docker}
	I1009 20:06:06.015309  463914 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33401 SSHKeyPath:/home/jenkins/minikube-integration/21683-294150/.minikube/machines/force-systemd-env-242564/id_rsa Username:docker}
	I1009 20:06:06.117421  463914 ssh_runner.go:195] Run: systemctl --version
	I1009 20:06:06.208163  463914 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1009 20:06:06.246019  463914 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1009 20:06:06.250453  463914 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1009 20:06:06.250528  463914 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1009 20:06:06.281368  463914 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1009 20:06:06.281435  463914 start.go:496] detecting cgroup driver to use...
	I1009 20:06:06.281467  463914 start.go:500] using "systemd" cgroup driver as enforced via flags
	I1009 20:06:06.281545  463914 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1009 20:06:06.300774  463914 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1009 20:06:06.314523  463914 docker.go:218] disabling cri-docker service (if available) ...
	I1009 20:06:06.314593  463914 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1009 20:06:06.332924  463914 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1009 20:06:06.353792  463914 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1009 20:06:06.480352  463914 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1009 20:06:06.609436  463914 docker.go:234] disabling docker service ...
	I1009 20:06:06.609526  463914 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1009 20:06:06.636198  463914 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1009 20:06:06.650422  463914 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1009 20:06:06.771294  463914 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1009 20:06:06.896845  463914 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1009 20:06:06.911573  463914 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1009 20:06:06.931618  463914 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1009 20:06:06.931726  463914 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 20:06:06.942007  463914 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1009 20:06:06.942137  463914 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 20:06:06.952105  463914 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 20:06:06.962001  463914 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 20:06:06.972166  463914 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1009 20:06:06.981181  463914 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 20:06:06.991146  463914 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 20:06:07.013226  463914 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 20:06:07.023977  463914 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1009 20:06:07.032583  463914 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1009 20:06:07.041083  463914 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 20:06:07.163127  463914 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1009 20:06:07.285208  463914 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1009 20:06:07.285299  463914 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1009 20:06:07.289599  463914 start.go:564] Will wait 60s for crictl version
	I1009 20:06:07.289677  463914 ssh_runner.go:195] Run: which crictl
	I1009 20:06:07.293824  463914 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1009 20:06:07.318107  463914 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1009 20:06:07.318210  463914 ssh_runner.go:195] Run: crio --version
	I1009 20:06:07.346996  463914 ssh_runner.go:195] Run: crio --version
	I1009 20:06:07.378811  463914 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1009 20:06:07.381556  463914 cli_runner.go:164] Run: docker network inspect force-systemd-env-242564 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1009 20:06:07.397512  463914 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1009 20:06:07.401411  463914 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1009 20:06:07.412022  463914 kubeadm.go:883] updating cluster {Name:force-systemd-env-242564 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:force-systemd-env-242564 Namespace:default APIServerHAVIP: APIServerName:
minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSo
ck: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1009 20:06:07.412134  463914 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1009 20:06:07.412196  463914 ssh_runner.go:195] Run: sudo crictl images --output json
	I1009 20:06:07.445552  463914 crio.go:514] all images are preloaded for cri-o runtime.
	I1009 20:06:07.445579  463914 crio.go:433] Images already preloaded, skipping extraction
	I1009 20:06:07.445637  463914 ssh_runner.go:195] Run: sudo crictl images --output json
	I1009 20:06:07.471831  463914 crio.go:514] all images are preloaded for cri-o runtime.
	I1009 20:06:07.471854  463914 cache_images.go:85] Images are preloaded, skipping loading
	I1009 20:06:07.471862  463914 kubeadm.go:934] updating node { 192.168.76.2 8443 v1.34.1 crio true true} ...
	I1009 20:06:07.471949  463914 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=force-systemd-env-242564 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:force-systemd-env-242564 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1009 20:06:07.472047  463914 ssh_runner.go:195] Run: crio config
	I1009 20:06:07.526282  463914 cni.go:84] Creating CNI manager for ""
	I1009 20:06:07.526313  463914 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1009 20:06:07.526328  463914 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1009 20:06:07.526352  463914 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:force-systemd-env-242564 NodeName:force-systemd-env-242564 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt Stati
cPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1009 20:06:07.526495  463914 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "force-systemd-env-242564"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
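
For reference, a rendered kubeadm config like the one above can be sanity-checked on the node before it is used for a real init; a minimal sketch reusing the paths from this log (the kubeadm config validate subcommand is assumed to be available in this kubeadm release):

	# check the rendered file against the kubeadm config schema
	sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml
	# or render all manifests without starting anything
	sudo /var/lib/minikube/binaries/v1.34.1/kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run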
	
	I1009 20:06:07.526573  463914 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1009 20:06:07.534833  463914 binaries.go:44] Found k8s binaries, skipping transfer
	I1009 20:06:07.534928  463914 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1009 20:06:07.543008  463914 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (374 bytes)
	I1009 20:06:07.556545  463914 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1009 20:06:07.570233  463914 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2220 bytes)
	I1009 20:06:07.583846  463914 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1009 20:06:07.587703  463914 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1009 20:06:07.598072  463914 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 20:06:07.716662  463914 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1009 20:06:07.734436  463914 certs.go:69] Setting up /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/force-systemd-env-242564 for IP: 192.168.76.2
	I1009 20:06:07.734457  463914 certs.go:195] generating shared ca certs ...
	I1009 20:06:07.734475  463914 certs.go:227] acquiring lock for ca certs: {Name:mk21fccf7de12baa51e226eec44546b42b579a70 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 20:06:07.734623  463914 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21683-294150/.minikube/ca.key
	I1009 20:06:07.734672  463914 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21683-294150/.minikube/proxy-client-ca.key
	I1009 20:06:07.734679  463914 certs.go:257] generating profile certs ...
	I1009 20:06:07.734738  463914 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/force-systemd-env-242564/client.key
	I1009 20:06:07.734759  463914 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/force-systemd-env-242564/client.crt with IP's: []
	I1009 20:06:08.024348  463914 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/force-systemd-env-242564/client.crt ...
	I1009 20:06:08.024383  463914 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/force-systemd-env-242564/client.crt: {Name:mkac7553ab0c16405ffc27546b65113c7f4ec0e9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 20:06:08.024607  463914 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/force-systemd-env-242564/client.key ...
	I1009 20:06:08.024624  463914 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/force-systemd-env-242564/client.key: {Name:mk1cc6c9da7266a73cd13f6fad0728d53ee5d5fb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 20:06:08.024731  463914 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/force-systemd-env-242564/apiserver.key.2ea16d24
	I1009 20:06:08.024760  463914 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/force-systemd-env-242564/apiserver.crt.2ea16d24 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1009 20:06:08.229356  463914 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/force-systemd-env-242564/apiserver.crt.2ea16d24 ...
	I1009 20:06:08.229382  463914 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/force-systemd-env-242564/apiserver.crt.2ea16d24: {Name:mk4dd28c4e24130f604f555d5ab54edf3b3b56a6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 20:06:08.229564  463914 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/force-systemd-env-242564/apiserver.key.2ea16d24 ...
	I1009 20:06:08.229575  463914 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/force-systemd-env-242564/apiserver.key.2ea16d24: {Name:mk64c7b57a8b25ea4f23f281fac9e031e57283d1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 20:06:08.229648  463914 certs.go:382] copying /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/force-systemd-env-242564/apiserver.crt.2ea16d24 -> /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/force-systemd-env-242564/apiserver.crt
	I1009 20:06:08.229726  463914 certs.go:386] copying /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/force-systemd-env-242564/apiserver.key.2ea16d24 -> /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/force-systemd-env-242564/apiserver.key
	I1009 20:06:08.229782  463914 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/force-systemd-env-242564/proxy-client.key
	I1009 20:06:08.229795  463914 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/force-systemd-env-242564/proxy-client.crt with IP's: []
	I1009 20:06:08.441569  463914 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/force-systemd-env-242564/proxy-client.crt ...
	I1009 20:06:08.441604  463914 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/force-systemd-env-242564/proxy-client.crt: {Name:mk1025018a7b705a66311ca022ea068a7e69f3f6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 20:06:08.441796  463914 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/force-systemd-env-242564/proxy-client.key ...
	I1009 20:06:08.441813  463914 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/force-systemd-env-242564/proxy-client.key: {Name:mkce3069b8670390eb918e89e064261c67730036 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
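
The profile certificates above are generated in-process by minikube; purely for illustration, an equivalent openssl sequence that would produce a serving certificate with the same IP SANs as the apiserver cert (file names are placeholders, not the paths minikube uses):

	openssl genrsa -out apiserver.key 2048
	openssl req -new -key apiserver.key -subj "/CN=minikube" -out apiserver.csr
	openssl x509 -req -in apiserver.csr -CA ca.crt -CAkey ca.key -CAcreateserial -days 365 -out apiserver.crt \
	  -extfile <(printf "subjectAltName=IP:10.96.0.1,IP:127.0.0.1,IP:10.0.0.1,IP:192.168.76.2")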
	I1009 20:06:08.441914  463914 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-294150/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1009 20:06:08.441942  463914 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-294150/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1009 20:06:08.441959  463914 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-294150/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1009 20:06:08.441976  463914 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-294150/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1009 20:06:08.441989  463914 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/force-systemd-env-242564/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1009 20:06:08.442005  463914 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/force-systemd-env-242564/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1009 20:06:08.442016  463914 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/force-systemd-env-242564/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1009 20:06:08.442030  463914 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/force-systemd-env-242564/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1009 20:06:08.442083  463914 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-294150/.minikube/certs/296002.pem (1338 bytes)
	W1009 20:06:08.442134  463914 certs.go:480] ignoring /home/jenkins/minikube-integration/21683-294150/.minikube/certs/296002_empty.pem, impossibly tiny 0 bytes
	I1009 20:06:08.442150  463914 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-294150/.minikube/certs/ca-key.pem (1679 bytes)
	I1009 20:06:08.442176  463914 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-294150/.minikube/certs/ca.pem (1078 bytes)
	I1009 20:06:08.442205  463914 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-294150/.minikube/certs/cert.pem (1123 bytes)
	I1009 20:06:08.442231  463914 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-294150/.minikube/certs/key.pem (1679 bytes)
	I1009 20:06:08.442278  463914 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-294150/.minikube/files/etc/ssl/certs/2960022.pem (1708 bytes)
	I1009 20:06:08.442315  463914 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-294150/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1009 20:06:08.442338  463914 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-294150/.minikube/certs/296002.pem -> /usr/share/ca-certificates/296002.pem
	I1009 20:06:08.442349  463914 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-294150/.minikube/files/etc/ssl/certs/2960022.pem -> /usr/share/ca-certificates/2960022.pem
	I1009 20:06:08.442901  463914 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-294150/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1009 20:06:08.464467  463914 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-294150/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1009 20:06:08.488580  463914 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-294150/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1009 20:06:08.510777  463914 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-294150/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1009 20:06:08.532531  463914 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/force-systemd-env-242564/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I1009 20:06:08.552406  463914 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/force-systemd-env-242564/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1009 20:06:08.572012  463914 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/force-systemd-env-242564/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1009 20:06:08.591275  463914 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/force-systemd-env-242564/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1009 20:06:08.610873  463914 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-294150/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1009 20:06:08.629960  463914 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-294150/.minikube/certs/296002.pem --> /usr/share/ca-certificates/296002.pem (1338 bytes)
	I1009 20:06:08.649783  463914 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-294150/.minikube/files/etc/ssl/certs/2960022.pem --> /usr/share/ca-certificates/2960022.pem (1708 bytes)
	I1009 20:06:08.669010  463914 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1009 20:06:08.683224  463914 ssh_runner.go:195] Run: openssl version
	I1009 20:06:08.690103  463914 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1009 20:06:08.699857  463914 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1009 20:06:08.704320  463914 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  9 19:01 /usr/share/ca-certificates/minikubeCA.pem
	I1009 20:06:08.704434  463914 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1009 20:06:08.746184  463914 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1009 20:06:08.755252  463914 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/296002.pem && ln -fs /usr/share/ca-certificates/296002.pem /etc/ssl/certs/296002.pem"
	I1009 20:06:08.764547  463914 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/296002.pem
	I1009 20:06:08.768704  463914 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  9 19:08 /usr/share/ca-certificates/296002.pem
	I1009 20:06:08.768770  463914 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/296002.pem
	I1009 20:06:08.810565  463914 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/296002.pem /etc/ssl/certs/51391683.0"
	I1009 20:06:08.821431  463914 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2960022.pem && ln -fs /usr/share/ca-certificates/2960022.pem /etc/ssl/certs/2960022.pem"
	I1009 20:06:08.836543  463914 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2960022.pem
	I1009 20:06:08.843734  463914 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  9 19:08 /usr/share/ca-certificates/2960022.pem
	I1009 20:06:08.843857  463914 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2960022.pem
	I1009 20:06:08.887169  463914 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2960022.pem /etc/ssl/certs/3ec20f2e.0"
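
The symlinks created above follow the standard OpenSSL hashed-directory layout: each CA file is linked under its subject hash so TLS clients can find it in /etc/ssl/certs. A minimal sketch of the same technique for a single certificate (variable names are illustrative; the file name is taken from the log):

	CERT=/usr/share/ca-certificates/minikubeCA.pem
	HASH=$(openssl x509 -hash -noout -in "$CERT")   # prints the subject hash, e.g. b5213941
	sudo ln -fs "$CERT" "/etc/ssl/certs/${HASH}.0"  # hash-named link in the system trust directory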
	I1009 20:06:08.896187  463914 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1009 20:06:08.900104  463914 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1009 20:06:08.900157  463914 kubeadm.go:400] StartCluster: {Name:force-systemd-env-242564 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:force-systemd-env-242564 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1009 20:06:08.900244  463914 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1009 20:06:08.900313  463914 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1009 20:06:08.931899  463914 cri.go:89] found id: ""
	I1009 20:06:08.931994  463914 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1009 20:06:08.940554  463914 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1009 20:06:08.949025  463914 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1009 20:06:08.949191  463914 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1009 20:06:08.957982  463914 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1009 20:06:08.958004  463914 kubeadm.go:157] found existing configuration files:
	
	I1009 20:06:08.958062  463914 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1009 20:06:08.966732  463914 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1009 20:06:08.966895  463914 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1009 20:06:08.975126  463914 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1009 20:06:08.983527  463914 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1009 20:06:08.983598  463914 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1009 20:06:08.991881  463914 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1009 20:06:09.008567  463914 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1009 20:06:09.008717  463914 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1009 20:06:09.018416  463914 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1009 20:06:09.028196  463914 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1009 20:06:09.028318  463914 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1009 20:06:09.036787  463914 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1009 20:06:09.081803  463914 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1009 20:06:09.082052  463914 kubeadm.go:318] [preflight] Running pre-flight checks
	I1009 20:06:09.108448  463914 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1009 20:06:09.108663  463914 kubeadm.go:318] KERNEL_VERSION: 5.15.0-1084-aws
	I1009 20:06:09.108752  463914 kubeadm.go:318] OS: Linux
	I1009 20:06:09.108843  463914 kubeadm.go:318] CGROUPS_CPU: enabled
	I1009 20:06:09.108913  463914 kubeadm.go:318] CGROUPS_CPUACCT: enabled
	I1009 20:06:09.108969  463914 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1009 20:06:09.109024  463914 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1009 20:06:09.109079  463914 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1009 20:06:09.109160  463914 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1009 20:06:09.109214  463914 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1009 20:06:09.109265  463914 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1009 20:06:09.109318  463914 kubeadm.go:318] CGROUPS_BLKIO: enabled
	I1009 20:06:09.191238  463914 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1009 20:06:09.191445  463914 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1009 20:06:09.191598  463914 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1009 20:06:09.200453  463914 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1009 20:06:09.207533  463914 out.go:252]   - Generating certificates and keys ...
	I1009 20:06:09.207635  463914 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1009 20:06:09.207708  463914 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1009 20:06:10.135936  463914 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1009 20:06:10.778970  463914 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1009 20:06:10.932017  463914 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1009 20:06:11.387770  463914 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1009 20:06:12.023530  463914 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1009 20:06:12.023713  463914 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [force-systemd-env-242564 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1009 20:06:12.804124  463914 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1009 20:06:12.804365  463914 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [force-systemd-env-242564 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1009 20:06:13.304668  463914 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1009 20:06:13.505799  463914 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1009 20:06:13.952295  463914 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1009 20:06:13.952661  463914 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1009 20:06:14.498354  463914 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1009 20:06:15.390231  463914 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1009 20:06:15.651517  463914 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1009 20:06:16.659300  463914 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1009 20:06:16.719843  463914 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1009 20:06:16.721176  463914 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1009 20:06:16.732911  463914 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1009 20:06:16.736873  463914 out.go:252]   - Booting up control plane ...
	I1009 20:06:16.737014  463914 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1009 20:06:16.741667  463914 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1009 20:06:16.743302  463914 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1009 20:06:16.764014  463914 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1009 20:06:16.764130  463914 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1009 20:06:16.772206  463914 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1009 20:06:16.772572  463914 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1009 20:06:16.772821  463914 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1009 20:06:16.898589  463914 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1009 20:06:16.898723  463914 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1009 20:06:18.401472  463914 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.500880876s
	I1009 20:06:18.403070  463914 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1009 20:06:18.403185  463914 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.76.2:8443/livez
	I1009 20:06:18.403302  463914 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1009 20:06:18.403395  463914 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1009 20:10:18.404163  463914 kubeadm.go:318] [control-plane-check] kube-apiserver is not healthy after 4m0.000912981s
	I1009 20:10:18.405430  463914 kubeadm.go:318] [control-plane-check] kube-scheduler is not healthy after 4m0.001005692s
	I1009 20:10:18.405535  463914 kubeadm.go:318] [control-plane-check] kube-controller-manager is not healthy after 4m0.000755345s
	I1009 20:10:18.405548  463914 kubeadm.go:318] 
	I1009 20:10:18.405666  463914 kubeadm.go:318] A control plane component may have crashed or exited when started by the container runtime.
	I1009 20:10:18.405785  463914 kubeadm.go:318] To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1009 20:10:18.405896  463914 kubeadm.go:318] Here is one example how you may list all running Kubernetes containers by using crictl:
	I1009 20:10:18.405997  463914 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1009 20:10:18.406076  463914 kubeadm.go:318] 	Once you have found the failing container, you can inspect its logs with:
	I1009 20:10:18.406159  463914 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	I1009 20:10:18.406164  463914 kubeadm.go:318] 
	I1009 20:10:18.410515  463914 kubeadm.go:318] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1009 20:10:18.410810  463914 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1009 20:10:18.410965  463914 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1009 20:10:18.411653  463914 kubeadm.go:318] error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.76.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": dial tcp 192.168.76.2:8443: connect: connection refused, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	I1009 20:10:18.411733  463914 kubeadm.go:318] To see the stack trace of this error execute with --v=5 or higher
	W1009 20:10:18.411897  463914 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [force-systemd-env-242564 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [force-systemd-env-242564 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.500880876s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.76.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-apiserver is not healthy after 4m0.000912981s
	[control-plane-check] kube-scheduler is not healthy after 4m0.001005692s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000755345s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.76.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": dial tcp 192.168.76.2:8443: connect: connection refused, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [force-systemd-env-242564 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [force-systemd-env-242564 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.500880876s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.76.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-apiserver is not healthy after 4m0.000912981s
	[control-plane-check] kube-scheduler is not healthy after 4m0.001005692s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000755345s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.76.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": dial tcp 192.168.76.2:8443: connect: connection refused, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
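
All three control-plane endpoints stayed connection-refused for the full 4m0s window, so the static pods never came up under the kubelet. A minimal sketch of the manual triage the kubeadm output recommends (runtime endpoint copied from the log; CONTAINERID is a placeholder):

	# list control-plane containers CRI-O knows about, running or exited
	sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause
	# inspect the logs of a failing container
	sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID
	# check whether the kubelet is even trying to create the static pods
	sudo journalctl -u kubelet --no-pager | tail -n 200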
	
	I1009 20:10:18.411988  463914 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1009 20:10:18.963132  463914 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1009 20:10:18.977619  463914 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1009 20:10:18.977689  463914 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1009 20:10:18.986534  463914 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1009 20:10:18.986558  463914 kubeadm.go:157] found existing configuration files:
	
	I1009 20:10:18.986612  463914 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1009 20:10:18.994753  463914 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1009 20:10:18.994821  463914 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1009 20:10:19.004493  463914 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1009 20:10:19.013713  463914 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1009 20:10:19.013838  463914 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1009 20:10:19.021740  463914 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1009 20:10:19.030152  463914 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1009 20:10:19.030234  463914 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1009 20:10:19.038956  463914 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1009 20:10:19.046954  463914 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1009 20:10:19.047020  463914 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1009 20:10:19.054922  463914 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1009 20:10:19.096279  463914 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1009 20:10:19.096512  463914 kubeadm.go:318] [preflight] Running pre-flight checks
	I1009 20:10:19.120034  463914 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1009 20:10:19.120114  463914 kubeadm.go:318] KERNEL_VERSION: 5.15.0-1084-aws
	I1009 20:10:19.120157  463914 kubeadm.go:318] OS: Linux
	I1009 20:10:19.120210  463914 kubeadm.go:318] CGROUPS_CPU: enabled
	I1009 20:10:19.120265  463914 kubeadm.go:318] CGROUPS_CPUACCT: enabled
	I1009 20:10:19.120318  463914 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1009 20:10:19.120372  463914 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1009 20:10:19.120427  463914 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1009 20:10:19.120481  463914 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1009 20:10:19.120532  463914 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1009 20:10:19.120585  463914 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1009 20:10:19.120637  463914 kubeadm.go:318] CGROUPS_BLKIO: enabled
	I1009 20:10:19.190203  463914 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1009 20:10:19.190333  463914 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1009 20:10:19.190435  463914 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1009 20:10:19.201592  463914 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1009 20:10:19.209271  463914 out.go:252]   - Generating certificates and keys ...
	I1009 20:10:19.209384  463914 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1009 20:10:19.209465  463914 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1009 20:10:19.209563  463914 kubeadm.go:318] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1009 20:10:19.209650  463914 kubeadm.go:318] [certs] Using existing front-proxy-ca certificate authority
	I1009 20:10:19.209737  463914 kubeadm.go:318] [certs] Using existing front-proxy-client certificate and key on disk
	I1009 20:10:19.209808  463914 kubeadm.go:318] [certs] Using existing etcd/ca certificate authority
	I1009 20:10:19.209887  463914 kubeadm.go:318] [certs] Using existing etcd/server certificate and key on disk
	I1009 20:10:19.209964  463914 kubeadm.go:318] [certs] Using existing etcd/peer certificate and key on disk
	I1009 20:10:19.210054  463914 kubeadm.go:318] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1009 20:10:19.210144  463914 kubeadm.go:318] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1009 20:10:19.210201  463914 kubeadm.go:318] [certs] Using the existing "sa" key
	I1009 20:10:19.210277  463914 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1009 20:10:19.480077  463914 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1009 20:10:19.682912  463914 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1009 20:10:19.987845  463914 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1009 20:10:20.479142  463914 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1009 20:10:20.795749  463914 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1009 20:10:20.796414  463914 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1009 20:10:20.799040  463914 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1009 20:10:20.802581  463914 out.go:252]   - Booting up control plane ...
	I1009 20:10:20.802691  463914 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1009 20:10:20.802778  463914 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1009 20:10:20.802853  463914 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1009 20:10:20.818818  463914 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1009 20:10:20.818937  463914 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1009 20:10:20.826928  463914 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1009 20:10:20.827278  463914 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1009 20:10:20.827549  463914 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1009 20:10:20.976098  463914 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1009 20:10:20.976232  463914 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1009 20:10:22.475447  463914 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.501689024s
	I1009 20:10:22.479240  463914 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1009 20:10:22.479350  463914 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.76.2:8443/livez
	I1009 20:10:22.479453  463914 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1009 20:10:22.479544  463914 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1009 20:14:22.479681  463914 kubeadm.go:318] [control-plane-check] kube-apiserver is not healthy after 4m0.000277452s
	I1009 20:14:22.482707  463914 kubeadm.go:318] [control-plane-check] kube-scheduler is not healthy after 4m0.000645048s
	I1009 20:14:22.482932  463914 kubeadm.go:318] [control-plane-check] kube-controller-manager is not healthy after 4m0.002894109s
	I1009 20:14:22.482941  463914 kubeadm.go:318] 
	I1009 20:14:22.483110  463914 kubeadm.go:318] A control plane component may have crashed or exited when started by the container runtime.
	I1009 20:14:22.483552  463914 kubeadm.go:318] To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1009 20:14:22.483758  463914 kubeadm.go:318] Here is one example how you may list all running Kubernetes containers by using crictl:
	I1009 20:14:22.483948  463914 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1009 20:14:22.484192  463914 kubeadm.go:318] 	Once you have found the failing container, you can inspect its logs with:
	I1009 20:14:22.484855  463914 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	I1009 20:14:22.484879  463914 kubeadm.go:318] 
	I1009 20:14:22.491220  463914 kubeadm.go:318] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1009 20:14:22.491535  463914 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1009 20:14:22.491702  463914 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1009 20:14:22.492339  463914 kubeadm.go:318] error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.76.2:8443/livez: client rate limiter Wait returned an error: context deadline exceeded, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	I1009 20:14:22.492492  463914 kubeadm.go:402] duration metric: took 8m13.592338343s to StartCluster
	I1009 20:14:22.492509  463914 kubeadm.go:318] To see the stack trace of this error execute with --v=5 or higher
	I1009 20:14:22.492552  463914 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 20:14:22.492630  463914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 20:14:22.518449  463914 cri.go:89] found id: ""
	I1009 20:14:22.518479  463914 logs.go:282] 0 containers: []
	W1009 20:14:22.518541  463914 logs.go:284] No container was found matching "kube-apiserver"
	I1009 20:14:22.518548  463914 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 20:14:22.518664  463914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 20:14:22.548895  463914 cri.go:89] found id: ""
	I1009 20:14:22.548919  463914 logs.go:282] 0 containers: []
	W1009 20:14:22.549025  463914 logs.go:284] No container was found matching "etcd"
	I1009 20:14:22.549035  463914 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 20:14:22.549169  463914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 20:14:22.579187  463914 cri.go:89] found id: ""
	I1009 20:14:22.579208  463914 logs.go:282] 0 containers: []
	W1009 20:14:22.579217  463914 logs.go:284] No container was found matching "coredns"
	I1009 20:14:22.579223  463914 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 20:14:22.579281  463914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 20:14:22.604969  463914 cri.go:89] found id: ""
	I1009 20:14:22.604991  463914 logs.go:282] 0 containers: []
	W1009 20:14:22.604999  463914 logs.go:284] No container was found matching "kube-scheduler"
	I1009 20:14:22.605006  463914 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 20:14:22.605101  463914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 20:14:22.635281  463914 cri.go:89] found id: ""
	I1009 20:14:22.635302  463914 logs.go:282] 0 containers: []
	W1009 20:14:22.635311  463914 logs.go:284] No container was found matching "kube-proxy"
	I1009 20:14:22.635317  463914 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 20:14:22.635377  463914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 20:14:22.663838  463914 cri.go:89] found id: ""
	I1009 20:14:22.663859  463914 logs.go:282] 0 containers: []
	W1009 20:14:22.663868  463914 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 20:14:22.663875  463914 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 20:14:22.663938  463914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 20:14:22.693821  463914 cri.go:89] found id: ""
	I1009 20:14:22.693844  463914 logs.go:282] 0 containers: []
	W1009 20:14:22.693854  463914 logs.go:284] No container was found matching "kindnet"
	I1009 20:14:22.693864  463914 logs.go:123] Gathering logs for kubelet ...
	I1009 20:14:22.693875  463914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 20:14:22.782568  463914 logs.go:123] Gathering logs for dmesg ...
	I1009 20:14:22.782603  463914 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 20:14:22.800054  463914 logs.go:123] Gathering logs for describe nodes ...
	I1009 20:14:22.800091  463914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 20:14:22.883687  463914 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1009 20:14:22.875403    2362 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1009 20:14:22.876140    2362 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1009 20:14:22.877700    2362 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1009 20:14:22.878200    2362 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1009 20:14:22.879688    2362 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1009 20:14:22.875403    2362 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1009 20:14:22.876140    2362 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1009 20:14:22.877700    2362 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1009 20:14:22.878200    2362 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1009 20:14:22.879688    2362 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 20:14:22.883710  463914 logs.go:123] Gathering logs for CRI-O ...
	I1009 20:14:22.883723  463914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 20:14:22.962758  463914 logs.go:123] Gathering logs for container status ...
	I1009 20:14:22.962792  463914 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1009 20:14:22.995289  463914 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.501689024s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.76.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-apiserver is not healthy after 4m0.000277452s
	[control-plane-check] kube-scheduler is not healthy after 4m0.000645048s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.002894109s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.76.2:8443/livez: client rate limiter Wait returned an error: context deadline exceeded, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	W1009 20:14:22.995354  463914 out.go:285] * 
	* 
	W1009 20:14:22.995414  463914 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.501689024s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.76.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-apiserver is not healthy after 4m0.000277452s
	[control-plane-check] kube-scheduler is not healthy after 4m0.000645048s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.002894109s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.76.2:8443/livez: client rate limiter Wait returned an error: context deadline exceeded, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.501689024s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.76.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-apiserver is not healthy after 4m0.000277452s
	[control-plane-check] kube-scheduler is not healthy after 4m0.000645048s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.002894109s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.76.2:8443/livez: client rate limiter Wait returned an error: context deadline exceeded, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	W1009 20:14:22.995434  463914 out.go:285] * 
	* 
	W1009 20:14:22.997819  463914 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1009 20:14:23.006879  463914 out.go:203] 
	W1009 20:14:23.010105  463914 out.go:285] X Exiting due to GUEST_START: failed to start node: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.501689024s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.76.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-apiserver is not healthy after 4m0.000277452s
	[control-plane-check] kube-scheduler is not healthy after 4m0.000645048s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.002894109s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.76.2:8443/livez: client rate limiter Wait returned an error: context deadline exceeded, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	X Exiting due to GUEST_START: failed to start node: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.501689024s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.76.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-apiserver is not healthy after 4m0.000277452s
	[control-plane-check] kube-scheduler is not healthy after 4m0.000645048s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.002894109s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.76.2:8443/livez: client rate limiter Wait returned an error: context deadline exceeded, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	W1009 20:14:23.010140  463914 out.go:285] * 
	* 
	I1009 20:14:23.013481  463914 out.go:203] 

                                                
                                                
** /stderr **
docker_test.go:157: failed to start minikube with args: "out/minikube-linux-arm64 start -p force-systemd-env-242564 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio" : exit status 80
docker_test.go:166: *** TestForceSystemdEnv FAILED at 2025-10-09 20:14:23.075597614 +0000 UTC m=+4466.680395376
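The kubeadm output above points at the usual next step on the node: list the kube-* containers through CRI-O's socket and read the logs of whichever component keeps exiting. A minimal sketch of that workflow, run from inside the node (for example via 'minikube ssh -p force-systemd-env-242564'); CONTAINERID is a placeholder taken from the first listing:

	# list all Kubernetes containers CRI-O knows about, including exited ones
	sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause
	# read the logs of the failing container
	sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID
	# CRI-O's own journal usually records why the static pods never stayed up
	sudo journalctl -u crio -n 400
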
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestForceSystemdEnv]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestForceSystemdEnv]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect force-systemd-env-242564
helpers_test.go:243: (dbg) docker inspect force-systemd-env-242564:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "81aa040560628ef6d0fbd85c672dde99229b68e1accd68804e9ae92ed612a7be",
	        "Created": "2025-10-09T20:06:00.514267353Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 464315,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-09T20:06:00.61089444Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:0e619a02aa1f399c748a05785ca381533d7a01df724b62f3a9ddd9e288db8f6f",
	        "ResolvConfPath": "/var/lib/docker/containers/81aa040560628ef6d0fbd85c672dde99229b68e1accd68804e9ae92ed612a7be/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/81aa040560628ef6d0fbd85c672dde99229b68e1accd68804e9ae92ed612a7be/hostname",
	        "HostsPath": "/var/lib/docker/containers/81aa040560628ef6d0fbd85c672dde99229b68e1accd68804e9ae92ed612a7be/hosts",
	        "LogPath": "/var/lib/docker/containers/81aa040560628ef6d0fbd85c672dde99229b68e1accd68804e9ae92ed612a7be/81aa040560628ef6d0fbd85c672dde99229b68e1accd68804e9ae92ed612a7be-json.log",
	        "Name": "/force-systemd-env-242564",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "force-systemd-env-242564:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "force-systemd-env-242564",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "81aa040560628ef6d0fbd85c672dde99229b68e1accd68804e9ae92ed612a7be",
	                "LowerDir": "/var/lib/docker/overlay2/c02ca9f74809975a52d493183cf59306de5e7c055003110142f9d5ca2ce69ac4-init/diff:/var/lib/docker/overlay2/810a91395ed9b7ed2c0bbbdee8600efcf64f88722cbabc47d471235a9f901ed9/diff",
	                "MergedDir": "/var/lib/docker/overlay2/c02ca9f74809975a52d493183cf59306de5e7c055003110142f9d5ca2ce69ac4/merged",
	                "UpperDir": "/var/lib/docker/overlay2/c02ca9f74809975a52d493183cf59306de5e7c055003110142f9d5ca2ce69ac4/diff",
	                "WorkDir": "/var/lib/docker/overlay2/c02ca9f74809975a52d493183cf59306de5e7c055003110142f9d5ca2ce69ac4/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "force-systemd-env-242564",
	                "Source": "/var/lib/docker/volumes/force-systemd-env-242564/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "force-systemd-env-242564",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "force-systemd-env-242564",
	                "name.minikube.sigs.k8s.io": "force-systemd-env-242564",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "69a1482a09162e2065823b1e20cb52670ccb58bd13cb56e7416283512c7a4efb",
	            "SandboxKey": "/var/run/docker/netns/69a1482a0916",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33401"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33402"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33405"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33403"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33404"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "force-systemd-env-242564": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "0a:6e:db:e1:ce:c2",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "bf4feebb802d6b1c2a5326afd07ef49664b51330d9dae35114354827a6a9f537",
	                    "EndpointID": "dfae90f6aee31aaca58ba8649a91bb44457819bb298d856d6ccafd11b5276769",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "force-systemd-env-242564",
	                        "81aa04056062"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
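The inspect dump above confirms the kicbase container itself is running and holds 192.168.76.2 on the force-systemd-env-242564 network, so the failure is inside the guest rather than at the Docker level. When only those details matter, the same check can be narrowed with docker's --format template flag; an illustrative example using this run's profile name:

	docker inspect --format '{{.State.Status}} {{.State.StartedAt}}' force-systemd-env-242564
	docker inspect --format '{{json .NetworkSettings.Networks}}' force-systemd-env-242564
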
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p force-systemd-env-242564 -n force-systemd-env-242564
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p force-systemd-env-242564 -n force-systemd-env-242564: exit status 6 (347.796443ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1009 20:14:23.437029  471015 status.go:458] kubeconfig endpoint: get endpoint: "force-systemd-env-242564" does not appear in /home/jenkins/minikube-integration/21683-294150/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:247: status error: exit status 6 (may be ok)
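The status output above also warns about a stale kubectl context, and the stderr shows the profile endpoint missing from the host kubeconfig under /home/jenkins/minikube-integration/21683-294150/kubeconfig. Outside of a CI post-mortem, the command minikube itself points to would look roughly like this (illustrative, using this run's profile name):

	minikube update-context -p force-systemd-env-242564
	kubectl config current-context
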
helpers_test.go:252: <<< TestForceSystemdEnv FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestForceSystemdEnv]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-env-242564 logs -n 25
helpers_test.go:260: TestForceSystemdEnv logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                    ARGS                                                    │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p cilium-535911 sudo cat /etc/kubernetes/kubelet.conf                                                     │ cilium-535911             │ jenkins │ v1.37.0 │ 09 Oct 25 20:05 UTC │                     │
	│ ssh     │ -p cilium-535911 sudo cat /var/lib/kubelet/config.yaml                                                     │ cilium-535911             │ jenkins │ v1.37.0 │ 09 Oct 25 20:05 UTC │                     │
	│ ssh     │ -p cilium-535911 sudo systemctl status docker --all --full --no-pager                                      │ cilium-535911             │ jenkins │ v1.37.0 │ 09 Oct 25 20:05 UTC │                     │
	│ ssh     │ -p cilium-535911 sudo systemctl cat docker --no-pager                                                      │ cilium-535911             │ jenkins │ v1.37.0 │ 09 Oct 25 20:05 UTC │                     │
	│ ssh     │ -p cilium-535911 sudo cat /etc/docker/daemon.json                                                          │ cilium-535911             │ jenkins │ v1.37.0 │ 09 Oct 25 20:05 UTC │                     │
	│ ssh     │ -p cilium-535911 sudo docker system info                                                                   │ cilium-535911             │ jenkins │ v1.37.0 │ 09 Oct 25 20:05 UTC │                     │
	│ ssh     │ -p cilium-535911 sudo systemctl status cri-docker --all --full --no-pager                                  │ cilium-535911             │ jenkins │ v1.37.0 │ 09 Oct 25 20:05 UTC │                     │
	│ ssh     │ -p cilium-535911 sudo systemctl cat cri-docker --no-pager                                                  │ cilium-535911             │ jenkins │ v1.37.0 │ 09 Oct 25 20:05 UTC │                     │
	│ ssh     │ -p cilium-535911 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                             │ cilium-535911             │ jenkins │ v1.37.0 │ 09 Oct 25 20:05 UTC │                     │
	│ ssh     │ -p cilium-535911 sudo cat /usr/lib/systemd/system/cri-docker.service                                       │ cilium-535911             │ jenkins │ v1.37.0 │ 09 Oct 25 20:05 UTC │                     │
	│ ssh     │ -p cilium-535911 sudo cri-dockerd --version                                                                │ cilium-535911             │ jenkins │ v1.37.0 │ 09 Oct 25 20:05 UTC │                     │
	│ ssh     │ -p cilium-535911 sudo systemctl status containerd --all --full --no-pager                                  │ cilium-535911             │ jenkins │ v1.37.0 │ 09 Oct 25 20:05 UTC │                     │
	│ ssh     │ -p cilium-535911 sudo systemctl cat containerd --no-pager                                                  │ cilium-535911             │ jenkins │ v1.37.0 │ 09 Oct 25 20:05 UTC │                     │
	│ ssh     │ -p cilium-535911 sudo cat /lib/systemd/system/containerd.service                                           │ cilium-535911             │ jenkins │ v1.37.0 │ 09 Oct 25 20:05 UTC │                     │
	│ ssh     │ -p cilium-535911 sudo cat /etc/containerd/config.toml                                                      │ cilium-535911             │ jenkins │ v1.37.0 │ 09 Oct 25 20:05 UTC │                     │
	│ ssh     │ -p cilium-535911 sudo containerd config dump                                                               │ cilium-535911             │ jenkins │ v1.37.0 │ 09 Oct 25 20:05 UTC │                     │
	│ ssh     │ -p cilium-535911 sudo systemctl status crio --all --full --no-pager                                        │ cilium-535911             │ jenkins │ v1.37.0 │ 09 Oct 25 20:05 UTC │                     │
	│ ssh     │ -p cilium-535911 sudo systemctl cat crio --no-pager                                                        │ cilium-535911             │ jenkins │ v1.37.0 │ 09 Oct 25 20:05 UTC │                     │
	│ ssh     │ -p cilium-535911 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                              │ cilium-535911             │ jenkins │ v1.37.0 │ 09 Oct 25 20:05 UTC │                     │
	│ ssh     │ -p cilium-535911 sudo crio config                                                                          │ cilium-535911             │ jenkins │ v1.37.0 │ 09 Oct 25 20:05 UTC │                     │
	│ delete  │ -p cilium-535911                                                                                           │ cilium-535911             │ jenkins │ v1.37.0 │ 09 Oct 25 20:05 UTC │ 09 Oct 25 20:05 UTC │
	│ start   │ -p force-systemd-env-242564 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio │ force-systemd-env-242564  │ jenkins │ v1.37.0 │ 09 Oct 25 20:05 UTC │                     │
	│ ssh     │ force-systemd-flag-736218 ssh cat /etc/crio/crio.conf.d/02-crio.conf                                       │ force-systemd-flag-736218 │ jenkins │ v1.37.0 │ 09 Oct 25 20:12 UTC │ 09 Oct 25 20:12 UTC │
	│ delete  │ -p force-systemd-flag-736218                                                                               │ force-systemd-flag-736218 │ jenkins │ v1.37.0 │ 09 Oct 25 20:12 UTC │ 09 Oct 25 20:12 UTC │
	│ start   │ -p cert-expiration-282540 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio     │ cert-expiration-282540    │ jenkins │ v1.37.0 │ 09 Oct 25 20:12 UTC │ 09 Oct 25 20:12 UTC │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/09 20:12:20
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1009 20:12:20.523635  468261 out.go:360] Setting OutFile to fd 1 ...
	I1009 20:12:20.523771  468261 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1009 20:12:20.523775  468261 out.go:374] Setting ErrFile to fd 2...
	I1009 20:12:20.523779  468261 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1009 20:12:20.524044  468261 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21683-294150/.minikube/bin
	I1009 20:12:20.524514  468261 out.go:368] Setting JSON to false
	I1009 20:12:20.525525  468261 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":10480,"bootTime":1760030261,"procs":176,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1009 20:12:20.525585  468261 start.go:143] virtualization:  
	I1009 20:12:20.531884  468261 out.go:179] * [cert-expiration-282540] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1009 20:12:20.535451  468261 out.go:179]   - MINIKUBE_LOCATION=21683
	I1009 20:12:20.535527  468261 notify.go:221] Checking for updates...
	I1009 20:12:20.541943  468261 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1009 20:12:20.545146  468261 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21683-294150/kubeconfig
	I1009 20:12:20.548415  468261 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21683-294150/.minikube
	I1009 20:12:20.551567  468261 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1009 20:12:20.554702  468261 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1009 20:12:20.558392  468261 config.go:182] Loaded profile config "force-systemd-env-242564": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1009 20:12:20.558495  468261 driver.go:422] Setting default libvirt URI to qemu:///system
	I1009 20:12:20.580061  468261 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1009 20:12:20.580186  468261 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1009 20:12:20.639478  468261 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-09 20:12:20.629856568 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214827008 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
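
	The driver health check above shells out to "docker system info --format {{json .}}" and decodes the result. A minimal sketch of that pattern, assuming only the handful of JSON fields visible in the log (the struct minikube actually decodes is much larger):

	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)

	// Illustrative subset of the fields visible in the docker info dump above.
	type dockerInfo struct {
		NCPU          int    `json:"NCPU"`
		MemTotal      int64  `json:"MemTotal"`
		CgroupDriver  string `json:"CgroupDriver"`
		ServerVersion string `json:"ServerVersion"`
		OSType        string `json:"OSType"`
		Architecture  string `json:"Architecture"`
	}

	func main() {
		out, err := exec.Command("docker", "system", "info", "--format", "{{json .}}").Output()
		if err != nil {
			fmt.Println("docker not available:", err)
			return
		}
		var info dockerInfo
		if err := json.Unmarshal(out, &info); err != nil {
			fmt.Println("unexpected output:", err)
			return
		}
		fmt.Printf("%d CPUs, %d MiB RAM, cgroup driver %q, server %s (%s/%s)\n",
			info.NCPU, info.MemTotal/1024/1024, info.CgroupDriver,
			info.ServerVersion, info.OSType, info.Architecture)
	}
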
	I1009 20:12:20.639584  468261 docker.go:319] overlay module found
	I1009 20:12:20.644678  468261 out.go:179] * Using the docker driver based on user configuration
	I1009 20:12:20.647666  468261 start.go:309] selected driver: docker
	I1009 20:12:20.647678  468261 start.go:930] validating driver "docker" against <nil>
	I1009 20:12:20.647691  468261 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1009 20:12:20.648423  468261 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1009 20:12:20.711141  468261 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-09 20:12:20.702072381 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214827008 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1009 20:12:20.711291  468261 start_flags.go:328] no existing cluster config was found, will generate one from the flags 
	I1009 20:12:20.711512  468261 start_flags.go:975] Wait components to verify : map[apiserver:true system_pods:true]
	I1009 20:12:20.714484  468261 out.go:179] * Using Docker driver with root privileges
	I1009 20:12:20.717431  468261 cni.go:84] Creating CNI manager for ""
	I1009 20:12:20.717505  468261 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1009 20:12:20.717512  468261 start_flags.go:337] Found "CNI" CNI - setting NetworkPlugin=cni
	I1009 20:12:20.717603  468261 start.go:353] cluster config:
	{Name:cert-expiration-282540 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:cert-expiration-282540 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.loca
l ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:3m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1009 20:12:20.720835  468261 out.go:179] * Starting "cert-expiration-282540" primary control-plane node in "cert-expiration-282540" cluster
	I1009 20:12:20.723854  468261 cache.go:123] Beginning downloading kic base image for docker with crio
	I1009 20:12:20.726953  468261 out.go:179] * Pulling base image v0.0.48-1759745255-21703 ...
	I1009 20:12:20.729801  468261 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1009 20:12:20.729856  468261 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21683-294150/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1009 20:12:20.729865  468261 cache.go:58] Caching tarball of preloaded images
	I1009 20:12:20.729959  468261 preload.go:233] Found /home/jenkins/minikube-integration/21683-294150/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1009 20:12:20.729969  468261 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1009 20:12:20.730085  468261 profile.go:143] Saving config to /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/cert-expiration-282540/config.json ...
	I1009 20:12:20.730102  468261 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/cert-expiration-282540/config.json: {Name:mk1bd60eafd05150523391da63e5b899550961c1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 20:12:20.730258  468261 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon
	I1009 20:12:20.750389  468261 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon, skipping pull
	I1009 20:12:20.750402  468261 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 exists in daemon, skipping load
	I1009 20:12:20.750414  468261 cache.go:232] Successfully downloaded all kic artifacts
	I1009 20:12:20.750437  468261 start.go:361] acquireMachinesLock for cert-expiration-282540: {Name:mkd76b403b9edaf009300c03e23c17d4eceafb7f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1009 20:12:20.750535  468261 start.go:365] duration metric: took 86.065µs to acquireMachinesLock for "cert-expiration-282540"
	I1009 20:12:20.750559  468261 start.go:94] Provisioning new machine with config: &{Name:cert-expiration-282540 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:cert-expiration-282540 Namespace:default APIServerHAVIP:
APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:3m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock:
SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1009 20:12:20.750620  468261 start.go:126] createHost starting for "" (driver="docker")
	I1009 20:12:20.754119  468261 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1009 20:12:20.754364  468261 start.go:160] libmachine.API.Create for "cert-expiration-282540" (driver="docker")
	I1009 20:12:20.754409  468261 client.go:168] LocalClient.Create starting
	I1009 20:12:20.754497  468261 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21683-294150/.minikube/certs/ca.pem
	I1009 20:12:20.754533  468261 main.go:141] libmachine: Decoding PEM data...
	I1009 20:12:20.754552  468261 main.go:141] libmachine: Parsing certificate...
	I1009 20:12:20.754605  468261 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21683-294150/.minikube/certs/cert.pem
	I1009 20:12:20.754621  468261 main.go:141] libmachine: Decoding PEM data...
	I1009 20:12:20.754629  468261 main.go:141] libmachine: Parsing certificate...
	I1009 20:12:20.755005  468261 cli_runner.go:164] Run: docker network inspect cert-expiration-282540 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1009 20:12:20.774090  468261 cli_runner.go:211] docker network inspect cert-expiration-282540 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1009 20:12:20.774159  468261 network_create.go:284] running [docker network inspect cert-expiration-282540] to gather additional debugging logs...
	I1009 20:12:20.774176  468261 cli_runner.go:164] Run: docker network inspect cert-expiration-282540
	W1009 20:12:20.789616  468261 cli_runner.go:211] docker network inspect cert-expiration-282540 returned with exit code 1
	I1009 20:12:20.789640  468261 network_create.go:287] error running [docker network inspect cert-expiration-282540]: docker network inspect cert-expiration-282540: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network cert-expiration-282540 not found
	I1009 20:12:20.789652  468261 network_create.go:289] output of [docker network inspect cert-expiration-282540]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network cert-expiration-282540 not found
	
	** /stderr **
	I1009 20:12:20.789768  468261 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1009 20:12:20.806192  468261 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-3847a6577684 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:6e:b5:e6:7d:c7:ad} reservation:<nil>}
	I1009 20:12:20.806576  468261 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-5742e12e0dad IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:9e:82:91:fd:a6:fb} reservation:<nil>}
	I1009 20:12:20.806821  468261 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-11b099636187 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:0e:bb:e5:1b:6d:a2} reservation:<nil>}
	I1009 20:12:20.807086  468261 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-bf4feebb802d IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:be:17:46:7f:8c:9e} reservation:<nil>}
	I1009 20:12:20.807505  468261 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001a70440}
	I1009 20:12:20.807520  468261 network_create.go:124] attempt to create docker network cert-expiration-282540 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1009 20:12:20.807580  468261 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=cert-expiration-282540 cert-expiration-282540
	I1009 20:12:20.879128  468261 network_create.go:108] docker network cert-expiration-282540 192.168.85.0/24 created
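
	The subnet scan above skips 192.168.49.0/24 through 192.168.76.0/24 and settles on 192.168.85.0/24. A minimal sketch of that selection loop, assuming the third octet steps by 9 as in the progression shown; the "taken" set is hard-coded here to mirror this host, whereas minikube derives it from the existing docker bridge networks:

	package main

	import "fmt"

	func main() {
		// Subnets already claimed by bridges on this CI host (from the log above).
		taken := map[string]bool{
			"192.168.49.0/24": true,
			"192.168.58.0/24": true,
			"192.168.67.0/24": true,
			"192.168.76.0/24": true,
		}
		// Walk candidate /24s in the same order as the log and take the first free one.
		for octet := 49; octet <= 255; octet += 9 {
			subnet := fmt.Sprintf("192.168.%d.0/24", octet)
			if taken[subnet] {
				fmt.Println("skipping subnet", subnet, "that is taken")
				continue
			}
			fmt.Println("using free private subnet", subnet)
			return
		}
		fmt.Println("no free /24 found in 192.168.0.0/16")
	}

	Run against the set above, this prints the same progression the log shows and ends on 192.168.85.0/24.
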
	I1009 20:12:20.879153  468261 kic.go:121] calculated static IP "192.168.85.2" for the "cert-expiration-282540" container
	I1009 20:12:20.879241  468261 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1009 20:12:20.899635  468261 cli_runner.go:164] Run: docker volume create cert-expiration-282540 --label name.minikube.sigs.k8s.io=cert-expiration-282540 --label created_by.minikube.sigs.k8s.io=true
	I1009 20:12:20.918059  468261 oci.go:103] Successfully created a docker volume cert-expiration-282540
	I1009 20:12:20.918130  468261 cli_runner.go:164] Run: docker run --rm --name cert-expiration-282540-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=cert-expiration-282540 --entrypoint /usr/bin/test -v cert-expiration-282540:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 -d /var/lib
	I1009 20:12:21.453947  468261 oci.go:107] Successfully prepared a docker volume cert-expiration-282540
	I1009 20:12:21.454009  468261 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1009 20:12:21.454029  468261 kic.go:194] Starting extracting preloaded images to volume ...
	I1009 20:12:21.454107  468261 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21683-294150/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v cert-expiration-282540:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 -I lz4 -xf /preloaded.tar -C /extractDir
	I1009 20:12:25.927731  468261 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21683-294150/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v cert-expiration-282540:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 -I lz4 -xf /preloaded.tar -C /extractDir: (4.473573802s)
	I1009 20:12:25.927753  468261 kic.go:203] duration metric: took 4.473719659s to extract preloaded images to volume ...
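
	The preload step above mounts the .tar.lz4 read-only into a throwaway kicbase container and untars it into the machine's named volume. A sketch of the equivalent invocation driven from Go; the paths and names are placeholders taken from this run:

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		preload := "/path/to/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4"
		volume := "cert-expiration-282540"
		image := "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703"

		// Throwaway container: tar reads the read-only preload and extracts it
		// into the named volume mounted at /extractDir.
		cmd := exec.Command("docker", "run", "--rm",
			"--entrypoint", "/usr/bin/tar",
			"-v", preload+":/preloaded.tar:ro",
			"-v", volume+":/extractDir",
			image,
			"-I", "lz4", "-xf", "/preloaded.tar", "-C", "/extractDir")
		out, err := cmd.CombinedOutput()
		fmt.Printf("%s", out)
		if err != nil {
			fmt.Println("extract failed:", err)
		}
	}
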
	W1009 20:12:25.927913  468261 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1009 20:12:25.928019  468261 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1009 20:12:25.982962  468261 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname cert-expiration-282540 --name cert-expiration-282540 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=cert-expiration-282540 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=cert-expiration-282540 --network cert-expiration-282540 --ip 192.168.85.2 --volume cert-expiration-282540:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92
	I1009 20:12:26.290871  468261 cli_runner.go:164] Run: docker container inspect cert-expiration-282540 --format={{.State.Running}}
	I1009 20:12:26.312933  468261 cli_runner.go:164] Run: docker container inspect cert-expiration-282540 --format={{.State.Status}}
	I1009 20:12:26.341274  468261 cli_runner.go:164] Run: docker exec cert-expiration-282540 stat /var/lib/dpkg/alternatives/iptables
	I1009 20:12:26.394350  468261 oci.go:144] the created container "cert-expiration-282540" has a running status.
	I1009 20:12:26.394376  468261 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21683-294150/.minikube/machines/cert-expiration-282540/id_rsa...
	I1009 20:12:27.825546  468261 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21683-294150/.minikube/machines/cert-expiration-282540/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1009 20:12:27.863111  468261 cli_runner.go:164] Run: docker container inspect cert-expiration-282540 --format={{.State.Status}}
	I1009 20:12:27.896223  468261 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1009 20:12:27.896234  468261 kic_runner.go:114] Args: [docker exec --privileged cert-expiration-282540 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1009 20:12:27.952607  468261 cli_runner.go:164] Run: docker container inspect cert-expiration-282540 --format={{.State.Status}}
	I1009 20:12:27.977017  468261 machine.go:93] provisionDockerMachine start ...
	I1009 20:12:27.977135  468261 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-282540
	I1009 20:12:28.007438  468261 main.go:141] libmachine: Using SSH client type: native
	I1009 20:12:28.007780  468261 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33406 <nil> <nil>}
	I1009 20:12:28.007788  468261 main.go:141] libmachine: About to run SSH command:
	hostname
	I1009 20:12:28.196779  468261 main.go:141] libmachine: SSH cmd err, output: <nil>: cert-expiration-282540
	
	I1009 20:12:28.196793  468261 ubuntu.go:182] provisioning hostname "cert-expiration-282540"
	I1009 20:12:28.196874  468261 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-282540
	I1009 20:12:28.215950  468261 main.go:141] libmachine: Using SSH client type: native
	I1009 20:12:28.216252  468261 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33406 <nil> <nil>}
	I1009 20:12:28.216261  468261 main.go:141] libmachine: About to run SSH command:
	sudo hostname cert-expiration-282540 && echo "cert-expiration-282540" | sudo tee /etc/hostname
	I1009 20:12:28.384840  468261 main.go:141] libmachine: SSH cmd err, output: <nil>: cert-expiration-282540
	
	I1009 20:12:28.384909  468261 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-282540
	I1009 20:12:28.409265  468261 main.go:141] libmachine: Using SSH client type: native
	I1009 20:12:28.409659  468261 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33406 <nil> <nil>}
	I1009 20:12:28.409686  468261 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\scert-expiration-282540' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 cert-expiration-282540/g' /etc/hosts;
				else 
					echo '127.0.1.1 cert-expiration-282540' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1009 20:12:28.557639  468261 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1009 20:12:28.557656  468261 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21683-294150/.minikube CaCertPath:/home/jenkins/minikube-integration/21683-294150/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21683-294150/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21683-294150/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21683-294150/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21683-294150/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21683-294150/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21683-294150/.minikube}
	I1009 20:12:28.557673  468261 ubuntu.go:190] setting up certificates
	I1009 20:12:28.557680  468261 provision.go:84] configureAuth start
	I1009 20:12:28.557742  468261 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" cert-expiration-282540
	I1009 20:12:28.575388  468261 provision.go:143] copyHostCerts
	I1009 20:12:28.575447  468261 exec_runner.go:144] found /home/jenkins/minikube-integration/21683-294150/.minikube/ca.pem, removing ...
	I1009 20:12:28.575455  468261 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21683-294150/.minikube/ca.pem
	I1009 20:12:28.575537  468261 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-294150/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21683-294150/.minikube/ca.pem (1078 bytes)
	I1009 20:12:28.575623  468261 exec_runner.go:144] found /home/jenkins/minikube-integration/21683-294150/.minikube/cert.pem, removing ...
	I1009 20:12:28.575627  468261 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21683-294150/.minikube/cert.pem
	I1009 20:12:28.575652  468261 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-294150/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21683-294150/.minikube/cert.pem (1123 bytes)
	I1009 20:12:28.575699  468261 exec_runner.go:144] found /home/jenkins/minikube-integration/21683-294150/.minikube/key.pem, removing ...
	I1009 20:12:28.575702  468261 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21683-294150/.minikube/key.pem
	I1009 20:12:28.575723  468261 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-294150/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21683-294150/.minikube/key.pem (1679 bytes)
	I1009 20:12:28.575788  468261 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21683-294150/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21683-294150/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21683-294150/.minikube/certs/ca-key.pem org=jenkins.cert-expiration-282540 san=[127.0.0.1 192.168.85.2 cert-expiration-282540 localhost minikube]
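
	The server certificate above is signed by the minikube CA and carries the SANs listed in the log. A minimal crypto/x509 sketch of issuing such a certificate, assuming a throwaway in-process CA rather than the persistent ca.pem/ca-key.pem used in this run, and an illustrative validity period:

	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"fmt"
		"math/big"
		"net"
		"time"
	)

	func main() {
		// Throwaway CA generated in-process for the sketch.
		caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
		caTmpl := &x509.Certificate{
			SerialNumber:          big.NewInt(1),
			Subject:               pkix.Name{CommonName: "minikubeCA"},
			NotBefore:             time.Now(),
			NotAfter:              time.Now().AddDate(10, 0, 0),
			IsCA:                  true,
			KeyUsage:              x509.KeyUsageCertSign | x509.KeyUsageDigitalSignature,
			BasicConstraintsValid: true,
		}
		caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
		caCert, _ := x509.ParseCertificate(caDER)

		// Server cert with the SANs seen in the log above.
		srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
		srvTmpl := &x509.Certificate{
			SerialNumber: big.NewInt(2),
			Subject:      pkix.Name{CommonName: "minikube", Organization: []string{"jenkins.cert-expiration-282540"}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().AddDate(1, 0, 0), // illustrative validity
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.85.2")},
			DNSNames:     []string{"cert-expiration-282540", "localhost", "minikube"},
		}
		srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, srvKey)
		fmt.Print(string(pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: srvDER})))
	}
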
	I1009 20:12:29.033687  468261 provision.go:177] copyRemoteCerts
	I1009 20:12:29.033750  468261 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1009 20:12:29.033788  468261 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-282540
	I1009 20:12:29.050509  468261 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33406 SSHKeyPath:/home/jenkins/minikube-integration/21683-294150/.minikube/machines/cert-expiration-282540/id_rsa Username:docker}
	I1009 20:12:29.153050  468261 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-294150/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1009 20:12:29.171112  468261 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-294150/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1009 20:12:29.190437  468261 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-294150/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1009 20:12:29.208706  468261 provision.go:87] duration metric: took 651.012299ms to configureAuth
	I1009 20:12:29.208722  468261 ubuntu.go:206] setting minikube options for container-runtime
	I1009 20:12:29.208903  468261 config.go:182] Loaded profile config "cert-expiration-282540": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1009 20:12:29.209002  468261 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-282540
	I1009 20:12:29.226091  468261 main.go:141] libmachine: Using SSH client type: native
	I1009 20:12:29.226401  468261 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33406 <nil> <nil>}
	I1009 20:12:29.226414  468261 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1009 20:12:29.476471  468261 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1009 20:12:29.476485  468261 machine.go:96] duration metric: took 1.499453764s to provisionDockerMachine
	I1009 20:12:29.476493  468261 client.go:171] duration metric: took 8.722079234s to LocalClient.Create
	I1009 20:12:29.476513  468261 start.go:168] duration metric: took 8.722150858s to libmachine.API.Create "cert-expiration-282540"
	I1009 20:12:29.476520  468261 start.go:294] postStartSetup for "cert-expiration-282540" (driver="docker")
	I1009 20:12:29.476529  468261 start.go:323] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1009 20:12:29.476594  468261 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1009 20:12:29.476638  468261 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-282540
	I1009 20:12:29.496301  468261 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33406 SSHKeyPath:/home/jenkins/minikube-integration/21683-294150/.minikube/machines/cert-expiration-282540/id_rsa Username:docker}
	I1009 20:12:29.597384  468261 ssh_runner.go:195] Run: cat /etc/os-release
	I1009 20:12:29.600828  468261 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1009 20:12:29.600847  468261 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1009 20:12:29.600857  468261 filesync.go:126] Scanning /home/jenkins/minikube-integration/21683-294150/.minikube/addons for local assets ...
	I1009 20:12:29.600912  468261 filesync.go:126] Scanning /home/jenkins/minikube-integration/21683-294150/.minikube/files for local assets ...
	I1009 20:12:29.600992  468261 filesync.go:149] local asset: /home/jenkins/minikube-integration/21683-294150/.minikube/files/etc/ssl/certs/2960022.pem -> 2960022.pem in /etc/ssl/certs
	I1009 20:12:29.601088  468261 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1009 20:12:29.608666  468261 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-294150/.minikube/files/etc/ssl/certs/2960022.pem --> /etc/ssl/certs/2960022.pem (1708 bytes)
	I1009 20:12:29.626643  468261 start.go:297] duration metric: took 150.108834ms for postStartSetup
	I1009 20:12:29.627013  468261 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" cert-expiration-282540
	I1009 20:12:29.644778  468261 profile.go:143] Saving config to /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/cert-expiration-282540/config.json ...
	I1009 20:12:29.645054  468261 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1009 20:12:29.645091  468261 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-282540
	I1009 20:12:29.661429  468261 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33406 SSHKeyPath:/home/jenkins/minikube-integration/21683-294150/.minikube/machines/cert-expiration-282540/id_rsa Username:docker}
	I1009 20:12:29.762116  468261 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1009 20:12:29.767065  468261 start.go:129] duration metric: took 9.016431979s to createHost
	I1009 20:12:29.767080  468261 start.go:84] releasing machines lock for "cert-expiration-282540", held for 9.016538074s
	I1009 20:12:29.767152  468261 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" cert-expiration-282540
	I1009 20:12:29.784103  468261 ssh_runner.go:195] Run: cat /version.json
	I1009 20:12:29.784146  468261 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-282540
	I1009 20:12:29.784164  468261 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1009 20:12:29.784220  468261 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-282540
	I1009 20:12:29.806462  468261 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33406 SSHKeyPath:/home/jenkins/minikube-integration/21683-294150/.minikube/machines/cert-expiration-282540/id_rsa Username:docker}
	I1009 20:12:29.813217  468261 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33406 SSHKeyPath:/home/jenkins/minikube-integration/21683-294150/.minikube/machines/cert-expiration-282540/id_rsa Username:docker}
	I1009 20:12:29.999555  468261 ssh_runner.go:195] Run: systemctl --version
	I1009 20:12:30.008243  468261 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1009 20:12:30.088746  468261 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1009 20:12:30.097226  468261 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1009 20:12:30.097294  468261 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1009 20:12:30.135149  468261 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1009 20:12:30.135165  468261 start.go:496] detecting cgroup driver to use...
	I1009 20:12:30.135211  468261 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1009 20:12:30.135290  468261 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1009 20:12:30.154785  468261 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1009 20:12:30.169201  468261 docker.go:218] disabling cri-docker service (if available) ...
	I1009 20:12:30.169270  468261 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1009 20:12:30.188160  468261 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1009 20:12:30.208110  468261 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1009 20:12:30.328557  468261 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1009 20:12:30.478647  468261 docker.go:234] disabling docker service ...
	I1009 20:12:30.478718  468261 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1009 20:12:30.500829  468261 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1009 20:12:30.514441  468261 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1009 20:12:30.635297  468261 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1009 20:12:30.762360  468261 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1009 20:12:30.775704  468261 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1009 20:12:30.791773  468261 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1009 20:12:30.791831  468261 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 20:12:30.801882  468261 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1009 20:12:30.801946  468261 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 20:12:30.811332  468261 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 20:12:30.819913  468261 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 20:12:30.828854  468261 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1009 20:12:30.838363  468261 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 20:12:30.848140  468261 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 20:12:30.862649  468261 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
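
	The sed pipeline above rewrites /etc/crio/crio.conf.d/02-crio.conf in place: pin the pause image, force the cgroupfs cgroup manager, put conmon in the pod cgroup, and open unprivileged low ports via default_sysctls. A condensed, in-memory sketch of those rewrites over a made-up fragment (the sample content and regexps are assumptions, not the file's real contents):

	package main

	import (
		"fmt"
		"regexp"
	)

	func main() {
		conf := `[crio.image]
	pause_image = "registry.k8s.io/pause:3.10"

	[crio.runtime]
	cgroup_manager = "systemd"
	conmon_cgroup = "system.slice"
	`
		// Pin the pause image and switch to cgroupfs, as in the sed calls above.
		conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
			ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.10.1"`)
		conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
			ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)
		// Drop any existing conmon_cgroup line, then re-add it plus the
		// default_sysctls block right after cgroup_manager.
		conf = regexp.MustCompile(`(?m)^\s*conmon_cgroup = .*$\n`).
			ReplaceAllString(conf, "")
		conf = regexp.MustCompile(`(?m)^(\s*cgroup_manager = .*)$`).
			ReplaceAllString(conf, "$1\nconmon_cgroup = \"pod\"\ndefault_sysctls = [\n  \"net.ipv4.ip_unprivileged_port_start=0\",\n]")
		fmt.Print(conf)
	}
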
	I1009 20:12:30.872213  468261 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1009 20:12:30.879984  468261 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1009 20:12:30.887541  468261 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 20:12:31.001438  468261 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1009 20:12:31.126754  468261 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1009 20:12:31.126847  468261 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1009 20:12:31.131281  468261 start.go:564] Will wait 60s for crictl version
	I1009 20:12:31.131343  468261 ssh_runner.go:195] Run: which crictl
	I1009 20:12:31.135287  468261 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1009 20:12:31.162394  468261 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1009 20:12:31.162475  468261 ssh_runner.go:195] Run: crio --version
	I1009 20:12:31.191171  468261 ssh_runner.go:195] Run: crio --version
	I1009 20:12:31.224400  468261 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1009 20:12:31.227085  468261 cli_runner.go:164] Run: docker network inspect cert-expiration-282540 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1009 20:12:31.243252  468261 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1009 20:12:31.247143  468261 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1009 20:12:31.257318  468261 kubeadm.go:883] updating cluster {Name:cert-expiration-282540 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:cert-expiration-282540 Namespace:default APIServerHAVIP: APIServerName:mini
kubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:3m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAge
ntPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1009 20:12:31.257415  468261 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1009 20:12:31.257479  468261 ssh_runner.go:195] Run: sudo crictl images --output json
	I1009 20:12:31.293335  468261 crio.go:514] all images are preloaded for cri-o runtime.
	I1009 20:12:31.293345  468261 crio.go:433] Images already preloaded, skipping extraction
	I1009 20:12:31.293399  468261 ssh_runner.go:195] Run: sudo crictl images --output json
	I1009 20:12:31.319467  468261 crio.go:514] all images are preloaded for cri-o runtime.
	I1009 20:12:31.319479  468261 cache_images.go:85] Images are preloaded, skipping loading
	I1009 20:12:31.319486  468261 kubeadm.go:934] updating node { 192.168.85.2 8443 v1.34.1 crio true true} ...
	I1009 20:12:31.319570  468261 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=cert-expiration-282540 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:cert-expiration-282540 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1009 20:12:31.319647  468261 ssh_runner.go:195] Run: crio config
	I1009 20:12:31.383946  468261 cni.go:84] Creating CNI manager for ""
	I1009 20:12:31.383958  468261 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1009 20:12:31.383974  468261 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1009 20:12:31.384005  468261 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:cert-expiration-282540 NodeName:cert-expiration-282540 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPo
dPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1009 20:12:31.384164  468261 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "cert-expiration-282540"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1009 20:12:31.384239  468261 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1009 20:12:31.400746  468261 binaries.go:44] Found k8s binaries, skipping transfer
	I1009 20:12:31.400815  468261 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1009 20:12:31.413738  468261 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (372 bytes)
	I1009 20:12:31.429357  468261 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1009 20:12:31.443623  468261 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2219 bytes)
	I1009 20:12:31.457221  468261 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1009 20:12:31.460918  468261 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
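
	Both /etc/hosts updates in this run follow the same pattern: drop any stale line for the name, append the fresh mapping, and copy the result back over /etc/hosts. A sketch of that idempotent update on an in-memory string (file handling, the temp file, and sudo are omitted):

	package main

	import (
		"fmt"
		"strings"
	)

	// upsertHost removes any existing line ending in "\t<name>" and appends
	// a fresh "ip\tname" entry, mirroring the grep -v / echo / cp one-liners above.
	func upsertHost(hosts, ip, name string) string {
		var kept []string
		for _, line := range strings.Split(strings.TrimRight(hosts, "\n"), "\n") {
			if !strings.HasSuffix(line, "\t"+name) {
				kept = append(kept, line)
			}
		}
		kept = append(kept, ip+"\t"+name)
		return strings.Join(kept, "\n") + "\n"
	}

	func main() {
		hosts := "127.0.0.1\tlocalhost\n192.168.85.1\thost.minikube.internal\n"
		fmt.Print(upsertHost(hosts, "192.168.85.2", "control-plane.minikube.internal"))
	}
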
	I1009 20:12:31.471508  468261 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 20:12:31.590163  468261 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1009 20:12:31.607072  468261 certs.go:69] Setting up /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/cert-expiration-282540 for IP: 192.168.85.2
	I1009 20:12:31.607083  468261 certs.go:195] generating shared ca certs ...
	I1009 20:12:31.607098  468261 certs.go:227] acquiring lock for ca certs: {Name:mk21fccf7de12baa51e226eec44546b42b579a70 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 20:12:31.607241  468261 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21683-294150/.minikube/ca.key
	I1009 20:12:31.607285  468261 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21683-294150/.minikube/proxy-client-ca.key
	I1009 20:12:31.607291  468261 certs.go:257] generating profile certs ...
	I1009 20:12:31.607343  468261 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/cert-expiration-282540/client.key
	I1009 20:12:31.607353  468261 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/cert-expiration-282540/client.crt with IP's: []
	I1009 20:12:32.704184  468261 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/cert-expiration-282540/client.crt ...
	I1009 20:12:32.704200  468261 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/cert-expiration-282540/client.crt: {Name:mk3509be56524bed7965d5f426408d5289c500c6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 20:12:32.704420  468261 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/cert-expiration-282540/client.key ...
	I1009 20:12:32.704429  468261 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/cert-expiration-282540/client.key: {Name:mk677f66469155dcaabc0645d58db0c10eb47760 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 20:12:32.704519  468261 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/cert-expiration-282540/apiserver.key.73f7f519
	I1009 20:12:32.704532  468261 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/cert-expiration-282540/apiserver.crt.73f7f519 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I1009 20:12:32.816520  468261 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/cert-expiration-282540/apiserver.crt.73f7f519 ...
	I1009 20:12:32.816540  468261 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/cert-expiration-282540/apiserver.crt.73f7f519: {Name:mk099a0a2e31f73622628621ffc988663329ed0a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 20:12:32.816737  468261 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/cert-expiration-282540/apiserver.key.73f7f519 ...
	I1009 20:12:32.816746  468261 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/cert-expiration-282540/apiserver.key.73f7f519: {Name:mk889693e4a01688844171548ce12d265656ef8b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 20:12:32.816829  468261 certs.go:382] copying /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/cert-expiration-282540/apiserver.crt.73f7f519 -> /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/cert-expiration-282540/apiserver.crt
	I1009 20:12:32.816912  468261 certs.go:386] copying /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/cert-expiration-282540/apiserver.key.73f7f519 -> /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/cert-expiration-282540/apiserver.key
	I1009 20:12:32.816971  468261 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/cert-expiration-282540/proxy-client.key
	I1009 20:12:32.816984  468261 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/cert-expiration-282540/proxy-client.crt with IP's: []
	I1009 20:12:33.080957  468261 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/cert-expiration-282540/proxy-client.crt ...
	I1009 20:12:33.080973  468261 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/cert-expiration-282540/proxy-client.crt: {Name:mke26fd6b633ae29da1d7b00781ebfcc39943fad Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 20:12:33.081201  468261 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/cert-expiration-282540/proxy-client.key ...
	I1009 20:12:33.081210  468261 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/cert-expiration-282540/proxy-client.key: {Name:mkf5ca8d87422ae14dc86678f457fa3cef0d860c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 20:12:33.081468  468261 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-294150/.minikube/certs/296002.pem (1338 bytes)
	W1009 20:12:33.081508  468261 certs.go:480] ignoring /home/jenkins/minikube-integration/21683-294150/.minikube/certs/296002_empty.pem, impossibly tiny 0 bytes
	I1009 20:12:33.081515  468261 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-294150/.minikube/certs/ca-key.pem (1679 bytes)
	I1009 20:12:33.081541  468261 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-294150/.minikube/certs/ca.pem (1078 bytes)
	I1009 20:12:33.081564  468261 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-294150/.minikube/certs/cert.pem (1123 bytes)
	I1009 20:12:33.081585  468261 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-294150/.minikube/certs/key.pem (1679 bytes)
	I1009 20:12:33.081624  468261 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-294150/.minikube/files/etc/ssl/certs/2960022.pem (1708 bytes)
	I1009 20:12:33.082191  468261 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-294150/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1009 20:12:33.101007  468261 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-294150/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1009 20:12:33.119746  468261 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-294150/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1009 20:12:33.138786  468261 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-294150/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1009 20:12:33.157661  468261 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/cert-expiration-282540/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1009 20:12:33.176306  468261 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/cert-expiration-282540/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1009 20:12:33.194284  468261 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/cert-expiration-282540/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1009 20:12:33.211862  468261 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/cert-expiration-282540/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1009 20:12:33.232817  468261 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-294150/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1009 20:12:33.257386  468261 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-294150/.minikube/certs/296002.pem --> /usr/share/ca-certificates/296002.pem (1338 bytes)
	I1009 20:12:33.279004  468261 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-294150/.minikube/files/etc/ssl/certs/2960022.pem --> /usr/share/ca-certificates/2960022.pem (1708 bytes)
	I1009 20:12:33.298159  468261 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
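
The block above copies the generated CA and profile certificates onto the node under /var/lib/minikube/certs. A minimal way to confirm what landed, using only paths shown in the log (run on the node, e.g. via minikube ssh):

	sudo ls -l /var/lib/minikube/certs/
	# Print subject and expiry of the freshly generated apiserver cert
	# (relevant here, since this profile sets CertExpiration:3m0s).
	sudo openssl x509 -noout -subject -enddate -in /var/lib/minikube/certs/apiserver.crt
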
	I1009 20:12:33.311258  468261 ssh_runner.go:195] Run: openssl version
	I1009 20:12:33.317451  468261 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1009 20:12:33.325997  468261 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1009 20:12:33.330046  468261 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  9 19:01 /usr/share/ca-certificates/minikubeCA.pem
	I1009 20:12:33.330102  468261 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1009 20:12:33.371633  468261 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1009 20:12:33.380413  468261 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/296002.pem && ln -fs /usr/share/ca-certificates/296002.pem /etc/ssl/certs/296002.pem"
	I1009 20:12:33.389266  468261 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/296002.pem
	I1009 20:12:33.393583  468261 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  9 19:08 /usr/share/ca-certificates/296002.pem
	I1009 20:12:33.393673  468261 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/296002.pem
	I1009 20:12:33.434957  468261 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/296002.pem /etc/ssl/certs/51391683.0"
	I1009 20:12:33.443598  468261 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2960022.pem && ln -fs /usr/share/ca-certificates/2960022.pem /etc/ssl/certs/2960022.pem"
	I1009 20:12:33.452399  468261 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2960022.pem
	I1009 20:12:33.456262  468261 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  9 19:08 /usr/share/ca-certificates/2960022.pem
	I1009 20:12:33.456317  468261 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2960022.pem
	I1009 20:12:33.502477  468261 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2960022.pem /etc/ssl/certs/3ec20f2e.0"
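
The three test/ln/hash passes above install extra CA certificates the way OpenSSL expects: each PEM is linked into /etc/ssl/certs, and a second symlink named after its subject hash (e.g. b5213941.0) is created so certificate verification can find it. A hand-run sketch of the same steps for the minikube CA, with paths taken from the log:

	sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	# Compute the subject hash OpenSSL uses to locate a CA at verify time.
	HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${HASH}.0"
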
	I1009 20:12:33.511217  468261 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1009 20:12:33.514970  468261 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1009 20:12:33.515024  468261 kubeadm.go:400] StartCluster: {Name:cert-expiration-282540 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:cert-expiration-282540 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:3m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1009 20:12:33.515088  468261 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1009 20:12:33.515148  468261 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1009 20:12:33.544668  468261 cri.go:89] found id: ""
	I1009 20:12:33.544755  468261 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1009 20:12:33.552781  468261 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1009 20:12:33.561014  468261 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1009 20:12:33.561073  468261 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1009 20:12:33.569643  468261 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1009 20:12:33.569655  468261 kubeadm.go:157] found existing configuration files:
	
	I1009 20:12:33.569707  468261 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1009 20:12:33.577898  468261 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1009 20:12:33.577972  468261 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1009 20:12:33.585980  468261 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1009 20:12:33.594446  468261 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1009 20:12:33.594514  468261 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1009 20:12:33.602444  468261 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1009 20:12:33.610410  468261 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1009 20:12:33.610467  468261 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1009 20:12:33.618664  468261 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1009 20:12:33.626605  468261 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1009 20:12:33.626660  468261 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
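
The grep/rm sequence above is the stale-config cleanup: any pre-existing kubeconfig under /etc/kubernetes that does not reference https://control-plane.minikube.internal:8443 is removed before kubeadm init runs. The same check condensed into one loop (the loop itself is illustrative; the endpoint and file names come from the log):

	ENDPOINT="https://control-plane.minikube.internal:8443"
	for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
	  # Keep the file only if it already points at the expected endpoint.
	  sudo grep -q "$ENDPOINT" "/etc/kubernetes/$f" || sudo rm -f "/etc/kubernetes/$f"
	done
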
	I1009 20:12:33.634312  468261 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1009 20:12:33.674837  468261 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1009 20:12:33.674888  468261 kubeadm.go:318] [preflight] Running pre-flight checks
	I1009 20:12:33.699260  468261 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1009 20:12:33.699328  468261 kubeadm.go:318] KERNEL_VERSION: 5.15.0-1084-aws
	I1009 20:12:33.699363  468261 kubeadm.go:318] OS: Linux
	I1009 20:12:33.699409  468261 kubeadm.go:318] CGROUPS_CPU: enabled
	I1009 20:12:33.699458  468261 kubeadm.go:318] CGROUPS_CPUACCT: enabled
	I1009 20:12:33.699506  468261 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1009 20:12:33.699555  468261 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1009 20:12:33.699604  468261 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1009 20:12:33.699652  468261 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1009 20:12:33.699698  468261 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1009 20:12:33.699747  468261 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1009 20:12:33.699793  468261 kubeadm.go:318] CGROUPS_BLKIO: enabled
	I1009 20:12:33.768000  468261 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1009 20:12:33.768116  468261 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1009 20:12:33.768209  468261 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1009 20:12:33.781505  468261 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1009 20:12:33.788152  468261 out.go:252]   - Generating certificates and keys ...
	I1009 20:12:33.788258  468261 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1009 20:12:33.788337  468261 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1009 20:12:33.905991  468261 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1009 20:12:34.034856  468261 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1009 20:12:34.090003  468261 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1009 20:12:34.298332  468261 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1009 20:12:34.680342  468261 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1009 20:12:34.680491  468261 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [cert-expiration-282540 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1009 20:12:35.396775  468261 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1009 20:12:35.396931  468261 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [cert-expiration-282540 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1009 20:12:35.809176  468261 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1009 20:12:36.216230  468261 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1009 20:12:36.487661  468261 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1009 20:12:36.487766  468261 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1009 20:12:36.838164  468261 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1009 20:12:37.288179  468261 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1009 20:12:37.645507  468261 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1009 20:12:38.415684  468261 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1009 20:12:38.719713  468261 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1009 20:12:38.720505  468261 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1009 20:12:38.723575  468261 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1009 20:12:38.727351  468261 out.go:252]   - Booting up control plane ...
	I1009 20:12:38.727459  468261 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1009 20:12:38.727539  468261 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1009 20:12:38.727609  468261 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1009 20:12:38.743206  468261 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1009 20:12:38.743312  468261 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1009 20:12:38.754086  468261 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1009 20:12:38.754388  468261 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1009 20:12:38.754449  468261 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1009 20:12:38.891792  468261 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1009 20:12:38.891912  468261 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1009 20:12:40.392977  468261 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.501290314s
	I1009 20:12:40.398906  468261 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1009 20:12:40.399011  468261 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.85.2:8443/livez
	I1009 20:12:40.399145  468261 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1009 20:12:40.399235  468261 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1009 20:12:42.978861  468261 kubeadm.go:318] [control-plane-check] kube-controller-manager is healthy after 2.579684402s
	I1009 20:12:45.202466  468261 kubeadm.go:318] [control-plane-check] kube-scheduler is healthy after 4.803557337s
	I1009 20:12:46.901725  468261 kubeadm.go:318] [control-plane-check] kube-apiserver is healthy after 6.502716358s
	I1009 20:12:46.920762  468261 kubeadm.go:318] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1009 20:12:46.940241  468261 kubeadm.go:318] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1009 20:12:46.954516  468261 kubeadm.go:318] [upload-certs] Skipping phase. Please see --upload-certs
	I1009 20:12:46.954721  468261 kubeadm.go:318] [mark-control-plane] Marking the node cert-expiration-282540 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1009 20:12:46.970233  468261 kubeadm.go:318] [bootstrap-token] Using token: fv1w94.euhfudpgvylb0rw2
	I1009 20:12:46.973221  468261 out.go:252]   - Configuring RBAC rules ...
	I1009 20:12:46.973347  468261 kubeadm.go:318] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1009 20:12:46.977514  468261 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1009 20:12:46.986911  468261 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1009 20:12:46.991269  468261 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1009 20:12:46.997568  468261 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1009 20:12:47.004526  468261 kubeadm.go:318] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1009 20:12:47.312278  468261 kubeadm.go:318] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1009 20:12:47.765262  468261 kubeadm.go:318] [addons] Applied essential addon: CoreDNS
	I1009 20:12:48.308926  468261 kubeadm.go:318] [addons] Applied essential addon: kube-proxy
	I1009 20:12:48.310277  468261 kubeadm.go:318] 
	I1009 20:12:48.310347  468261 kubeadm.go:318] Your Kubernetes control-plane has initialized successfully!
	I1009 20:12:48.310351  468261 kubeadm.go:318] 
	I1009 20:12:48.310430  468261 kubeadm.go:318] To start using your cluster, you need to run the following as a regular user:
	I1009 20:12:48.310434  468261 kubeadm.go:318] 
	I1009 20:12:48.310459  468261 kubeadm.go:318]   mkdir -p $HOME/.kube
	I1009 20:12:48.310552  468261 kubeadm.go:318]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1009 20:12:48.310616  468261 kubeadm.go:318]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1009 20:12:48.310620  468261 kubeadm.go:318] 
	I1009 20:12:48.310686  468261 kubeadm.go:318] Alternatively, if you are the root user, you can run:
	I1009 20:12:48.310690  468261 kubeadm.go:318] 
	I1009 20:12:48.310744  468261 kubeadm.go:318]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1009 20:12:48.310748  468261 kubeadm.go:318] 
	I1009 20:12:48.310804  468261 kubeadm.go:318] You should now deploy a pod network to the cluster.
	I1009 20:12:48.310896  468261 kubeadm.go:318] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1009 20:12:48.310976  468261 kubeadm.go:318]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1009 20:12:48.310980  468261 kubeadm.go:318] 
	I1009 20:12:48.311072  468261 kubeadm.go:318] You can now join any number of control-plane nodes by copying certificate authorities
	I1009 20:12:48.311159  468261 kubeadm.go:318] and service account keys on each node and then running the following as root:
	I1009 20:12:48.311163  468261 kubeadm.go:318] 
	I1009 20:12:48.311256  468261 kubeadm.go:318]   kubeadm join control-plane.minikube.internal:8443 --token fv1w94.euhfudpgvylb0rw2 \
	I1009 20:12:48.311372  468261 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:e766d16640f098061f552dd476e80ebd3809bd57b4957045222f32c55d34903e \
	I1009 20:12:48.311393  468261 kubeadm.go:318] 	--control-plane 
	I1009 20:12:48.311396  468261 kubeadm.go:318] 
	I1009 20:12:48.311484  468261 kubeadm.go:318] Then you can join any number of worker nodes by running the following on each as root:
	I1009 20:12:48.311488  468261 kubeadm.go:318] 
	I1009 20:12:48.311584  468261 kubeadm.go:318] kubeadm join control-plane.minikube.internal:8443 --token fv1w94.euhfudpgvylb0rw2 \
	I1009 20:12:48.311695  468261 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:e766d16640f098061f552dd476e80ebd3809bd57b4957045222f32c55d34903e 
	I1009 20:12:48.315162  468261 kubeadm.go:318] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1009 20:12:48.315407  468261 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1009 20:12:48.315514  468261 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
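
kubeadm init succeeds here but warns that cgroups v1 is in maintenance mode and that the kubelet systemd unit is not enabled. Two standard checks, not taken from this log, that clarify both warnings:

	# "cgroup2fs" means the unified cgroup v2 hierarchy; "tmpfs" indicates the
	# legacy v1 setup the warning above refers to.
	stat -fc %T /sys/fs/cgroup/
	# Enable the kubelet unit so it starts on boot (minikube starts it explicitly instead).
	sudo systemctl enable kubelet.service
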
	I1009 20:12:48.315529  468261 cni.go:84] Creating CNI manager for ""
	I1009 20:12:48.315536  468261 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1009 20:12:48.318780  468261 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1009 20:12:48.321758  468261 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1009 20:12:48.327897  468261 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1009 20:12:48.327913  468261 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1009 20:12:48.344584  468261 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
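
Because the docker driver is paired with the crio runtime, minikube applies the kindnet CNI manifest with the bundled kubectl. A quick follow-up check that the CNI pods come up and the node turns Ready (kubectl invocation copied from the log; the grep is illustrative):

	sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig get nodes -o wide
	sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get pods -o wide | grep kindnet
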
	I1009 20:12:48.639520  468261 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1009 20:12:48.639650  468261 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1009 20:12:48.639735  468261 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes cert-expiration-282540 minikube.k8s.io/updated_at=2025_10_09T20_12_48_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=67931d2cc0a2e29153be17ee4f2d502b8a45c9cb minikube.k8s.io/name=cert-expiration-282540 minikube.k8s.io/primary=true
	I1009 20:12:48.656654  468261 ops.go:34] apiserver oom_adj: -16
	I1009 20:12:48.790434  468261 kubeadm.go:1113] duration metric: took 150.834959ms to wait for elevateKubeSystemPrivileges
	I1009 20:12:48.790454  468261 kubeadm.go:402] duration metric: took 15.275434384s to StartCluster
	I1009 20:12:48.790469  468261 settings.go:142] acquiring lock: {Name:mk20228ebaa2294ae35726600a0d8058088b24a4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 20:12:48.790536  468261 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21683-294150/kubeconfig
	I1009 20:12:48.791201  468261 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-294150/kubeconfig: {Name:mke4661344aa77e10bf6690825e8d3a29b29b411 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 20:12:48.791429  468261 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1009 20:12:48.791512  468261 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1009 20:12:48.791759  468261 config.go:182] Loaded profile config "cert-expiration-282540": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1009 20:12:48.791803  468261 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1009 20:12:48.791861  468261 addons.go:69] Setting storage-provisioner=true in profile "cert-expiration-282540"
	I1009 20:12:48.791875  468261 addons.go:238] Setting addon storage-provisioner=true in "cert-expiration-282540"
	I1009 20:12:48.791895  468261 host.go:66] Checking if "cert-expiration-282540" exists ...
	I1009 20:12:48.792413  468261 cli_runner.go:164] Run: docker container inspect cert-expiration-282540 --format={{.State.Status}}
	I1009 20:12:48.793172  468261 addons.go:69] Setting default-storageclass=true in profile "cert-expiration-282540"
	I1009 20:12:48.793190  468261 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "cert-expiration-282540"
	I1009 20:12:48.793505  468261 cli_runner.go:164] Run: docker container inspect cert-expiration-282540 --format={{.State.Status}}
	I1009 20:12:48.795267  468261 out.go:179] * Verifying Kubernetes components...
	I1009 20:12:48.800233  468261 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 20:12:48.833154  468261 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1009 20:12:48.836272  468261 addons.go:238] Setting addon default-storageclass=true in "cert-expiration-282540"
	I1009 20:12:48.836300  468261 host.go:66] Checking if "cert-expiration-282540" exists ...
	I1009 20:12:48.836718  468261 cli_runner.go:164] Run: docker container inspect cert-expiration-282540 --format={{.State.Status}}
	I1009 20:12:48.836941  468261 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1009 20:12:48.836949  468261 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1009 20:12:48.836997  468261 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-282540
	I1009 20:12:48.878552  468261 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1009 20:12:48.878564  468261 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1009 20:12:48.878628  468261 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-282540
	I1009 20:12:48.880572  468261 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33406 SSHKeyPath:/home/jenkins/minikube-integration/21683-294150/.minikube/machines/cert-expiration-282540/id_rsa Username:docker}
	I1009 20:12:48.910986  468261 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33406 SSHKeyPath:/home/jenkins/minikube-integration/21683-294150/.minikube/machines/cert-expiration-282540/id_rsa Username:docker}
	I1009 20:12:49.139905  468261 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1009 20:12:49.149332  468261 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1009 20:12:49.152013  468261 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1009 20:12:49.158150  468261 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1009 20:12:49.761135  468261 start.go:977] {"host.minikube.internal": 192.168.85.1} host record injected into CoreDNS's ConfigMap
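
The long sed pipeline above splices a hosts block into the CoreDNS Corefile so pods can resolve host.minikube.internal to the gateway address 192.168.85.1. To see the injected stanza afterwards (command assembled from paths in the log):

	sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
	  -n kube-system get configmap coredns -o yaml | grep -A 3 'hosts {'
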
	I1009 20:12:49.763048  468261 api_server.go:52] waiting for apiserver process to appear ...
	I1009 20:12:49.763096  468261 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:12:49.793436  468261 api_server.go:72] duration metric: took 1.001979494s to wait for apiserver process to appear ...
	I1009 20:12:49.793450  468261 api_server.go:88] waiting for apiserver healthz status ...
	I1009 20:12:49.793470  468261 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1009 20:12:49.808861  468261 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1009 20:12:49.810436  468261 api_server.go:141] control plane version: v1.34.1
	I1009 20:12:49.810453  468261 api_server.go:131] duration metric: took 16.997294ms to wait for apiserver health ...
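
The healthz poll above can be reproduced from the host with a plain HTTPS request (address and port from the log; -k skips certificate verification for brevity):

	# A healthy apiserver answers 200 with the body "ok", matching the log entry above.
	curl -k https://192.168.85.2:8443/healthz
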
	I1009 20:12:49.810482  468261 system_pods.go:43] waiting for kube-system pods to appear ...
	I1009 20:12:49.814474  468261 system_pods.go:59] 5 kube-system pods found
	I1009 20:12:49.814496  468261 system_pods.go:61] "etcd-cert-expiration-282540" [bb66928a-ff25-4006-b4f3-8b6c4c43ff88] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1009 20:12:49.814504  468261 system_pods.go:61] "kube-apiserver-cert-expiration-282540" [3eb0d9f3-c6de-4e99-ba38-558a1b2e5e11] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1009 20:12:49.814511  468261 system_pods.go:61] "kube-controller-manager-cert-expiration-282540" [658ffea1-2f36-43b4-bef1-37fb9d7277ec] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1009 20:12:49.814517  468261 system_pods.go:61] "kube-scheduler-cert-expiration-282540" [2fca9e64-0736-46c3-8d94-9db3fd33ae9e] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1009 20:12:49.814522  468261 system_pods.go:61] "storage-provisioner" [42fb39e2-8653-4ace-83c9-1a552a075634] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1009 20:12:49.814526  468261 system_pods.go:74] duration metric: took 4.039956ms to wait for pod list to return data ...
	I1009 20:12:49.814537  468261 kubeadm.go:586] duration metric: took 1.023087259s to wait for: map[apiserver:true system_pods:true]
	I1009 20:12:49.814549  468261 node_conditions.go:102] verifying NodePressure condition ...
	I1009 20:12:49.815639  468261 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1009 20:12:49.817423  468261 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1009 20:12:49.817440  468261 node_conditions.go:123] node cpu capacity is 2
	I1009 20:12:49.817451  468261 node_conditions.go:105] duration metric: took 2.897831ms to run NodePressure ...
	I1009 20:12:49.817462  468261 start.go:242] waiting for startup goroutines ...
	I1009 20:12:49.818579  468261 addons.go:514] duration metric: took 1.026776184s for enable addons: enabled=[storage-provisioner default-storageclass]
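
With storage-provisioner and default-storageclass enabled, a minimal sanity check is to confirm the provisioner pod exists (it is still Pending above because the node carries the not-ready taint at this point) and that a default StorageClass was created:

	sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get pod storage-provisioner
	sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig get storageclass
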
	I1009 20:12:50.264538  468261 kapi.go:214] "coredns" deployment in "kube-system" namespace and "cert-expiration-282540" context rescaled to 1 replicas
	I1009 20:12:50.264571  468261 start.go:247] waiting for cluster config update ...
	I1009 20:12:50.264583  468261 start.go:256] writing updated cluster config ...
	I1009 20:12:50.264895  468261 ssh_runner.go:195] Run: rm -f paused
	I1009 20:12:50.322475  468261 start.go:628] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1009 20:12:50.325803  468261 out.go:179] * Done! kubectl is now configured to use "cert-expiration-282540" cluster and "default" namespace by default
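
The start finishes with a note that the local kubectl (1.33.2) is one minor version behind the cluster (1.34.1); a skew of one minor version is within kubectl's supported range. To compare the two from the workstation:

	# Shows client and server versions side by side.
	kubectl version --output=yaml
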
	I1009 20:14:22.479681  463914 kubeadm.go:318] [control-plane-check] kube-apiserver is not healthy after 4m0.000277452s
	I1009 20:14:22.482707  463914 kubeadm.go:318] [control-plane-check] kube-scheduler is not healthy after 4m0.000645048s
	I1009 20:14:22.482932  463914 kubeadm.go:318] [control-plane-check] kube-controller-manager is not healthy after 4m0.002894109s
	I1009 20:14:22.482941  463914 kubeadm.go:318] 
	I1009 20:14:22.483110  463914 kubeadm.go:318] A control plane component may have crashed or exited when started by the container runtime.
	I1009 20:14:22.483552  463914 kubeadm.go:318] To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1009 20:14:22.483758  463914 kubeadm.go:318] Here is one example how you may list all running Kubernetes containers by using crictl:
	I1009 20:14:22.483948  463914 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1009 20:14:22.484192  463914 kubeadm.go:318] 	Once you have found the failing container, you can inspect its logs with:
	I1009 20:14:22.484855  463914 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	I1009 20:14:22.484879  463914 kubeadm.go:318] 
	I1009 20:14:22.491220  463914 kubeadm.go:318] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1009 20:14:22.491535  463914 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1009 20:14:22.491702  463914 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1009 20:14:22.492339  463914 kubeadm.go:318] error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.76.2:8443/livez: client rate limiter Wait returned an error: context deadline exceeded, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	I1009 20:14:22.492492  463914 kubeadm.go:402] duration metric: took 8m13.592338343s to StartCluster
	I1009 20:14:22.492509  463914 kubeadm.go:318] To see the stack trace of this error execute with --v=5 or higher
	I1009 20:14:22.492552  463914 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 20:14:22.492630  463914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 20:14:22.518449  463914 cri.go:89] found id: ""
	I1009 20:14:22.518479  463914 logs.go:282] 0 containers: []
	W1009 20:14:22.518541  463914 logs.go:284] No container was found matching "kube-apiserver"
	I1009 20:14:22.518548  463914 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 20:14:22.518664  463914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 20:14:22.548895  463914 cri.go:89] found id: ""
	I1009 20:14:22.548919  463914 logs.go:282] 0 containers: []
	W1009 20:14:22.549025  463914 logs.go:284] No container was found matching "etcd"
	I1009 20:14:22.549035  463914 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 20:14:22.549169  463914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 20:14:22.579187  463914 cri.go:89] found id: ""
	I1009 20:14:22.579208  463914 logs.go:282] 0 containers: []
	W1009 20:14:22.579217  463914 logs.go:284] No container was found matching "coredns"
	I1009 20:14:22.579223  463914 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 20:14:22.579281  463914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 20:14:22.604969  463914 cri.go:89] found id: ""
	I1009 20:14:22.604991  463914 logs.go:282] 0 containers: []
	W1009 20:14:22.604999  463914 logs.go:284] No container was found matching "kube-scheduler"
	I1009 20:14:22.605006  463914 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 20:14:22.605101  463914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 20:14:22.635281  463914 cri.go:89] found id: ""
	I1009 20:14:22.635302  463914 logs.go:282] 0 containers: []
	W1009 20:14:22.635311  463914 logs.go:284] No container was found matching "kube-proxy"
	I1009 20:14:22.635317  463914 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 20:14:22.635377  463914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 20:14:22.663838  463914 cri.go:89] found id: ""
	I1009 20:14:22.663859  463914 logs.go:282] 0 containers: []
	W1009 20:14:22.663868  463914 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 20:14:22.663875  463914 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 20:14:22.663938  463914 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 20:14:22.693821  463914 cri.go:89] found id: ""
	I1009 20:14:22.693844  463914 logs.go:282] 0 containers: []
	W1009 20:14:22.693854  463914 logs.go:284] No container was found matching "kindnet"
	I1009 20:14:22.693864  463914 logs.go:123] Gathering logs for kubelet ...
	I1009 20:14:22.693875  463914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 20:14:22.782568  463914 logs.go:123] Gathering logs for dmesg ...
	I1009 20:14:22.782603  463914 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 20:14:22.800054  463914 logs.go:123] Gathering logs for describe nodes ...
	I1009 20:14:22.800091  463914 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 20:14:22.883687  463914 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1009 20:14:22.875403    2362 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1009 20:14:22.876140    2362 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1009 20:14:22.877700    2362 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1009 20:14:22.878200    2362 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1009 20:14:22.879688    2362 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1009 20:14:22.875403    2362 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1009 20:14:22.876140    2362 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1009 20:14:22.877700    2362 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1009 20:14:22.878200    2362 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1009 20:14:22.879688    2362 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 20:14:22.883710  463914 logs.go:123] Gathering logs for CRI-O ...
	I1009 20:14:22.883723  463914 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 20:14:22.962758  463914 logs.go:123] Gathering logs for container status ...
	I1009 20:14:22.962792  463914 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
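
Every "found id: \"\"" above means no control-plane container was ever created for this second profile, so log gathering falls back to kubelet, dmesg and CRI-O. The same triage can be run by hand on the node; the commands mirror the ones in the log (plus --no-pager, a standard journalctl flag not used there):

	sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a
	sudo journalctl -u kubelet -n 400 --no-pager
	sudo journalctl -u crio -n 400 --no-pager
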
	W1009 20:14:22.995289  463914 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.501689024s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.76.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-apiserver is not healthy after 4m0.000277452s
	[control-plane-check] kube-scheduler is not healthy after 4m0.000645048s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.002894109s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.76.2:8443/livez: client rate limiter Wait returned an error: context deadline exceeded, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	W1009 20:14:22.995354  463914 out.go:285] * 
	W1009 20:14:22.995414  463914 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.501689024s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.76.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-apiserver is not healthy after 4m0.000277452s
	[control-plane-check] kube-scheduler is not healthy after 4m0.000645048s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.002894109s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.76.2:8443/livez: client rate limiter Wait returned an error: context deadline exceeded, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	W1009 20:14:22.995434  463914 out.go:285] * 
	W1009 20:14:22.997819  463914 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1009 20:14:23.006879  463914 out.go:203] 
	W1009 20:14:23.010105  463914 out.go:285] X Exiting due to GUEST_START: failed to start node: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.501689024s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.76.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-apiserver is not healthy after 4m0.000277452s
	[control-plane-check] kube-scheduler is not healthy after 4m0.000645048s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.002894109s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.76.2:8443/livez: client rate limiter Wait returned an error: context deadline exceeded, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	W1009 20:14:23.010140  463914 out.go:285] * 
	I1009 20:14:23.013481  463914 out.go:203] 
	
	
	==> CRI-O <==
	Oct 09 20:14:12 force-systemd-env-242564 crio[838]: time="2025-10-09T20:14:12.386302028Z" level=info msg="createCtr: removing container 925aa67d3fedd5a58307bc907c6aa7e61544504df2060604ad56dad7d7930737" id=dcfba211-bd55-4c39-a1e9-29d836f1807c name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 20:14:12 force-systemd-env-242564 crio[838]: time="2025-10-09T20:14:12.386339641Z" level=info msg="createCtr: deleting container 925aa67d3fedd5a58307bc907c6aa7e61544504df2060604ad56dad7d7930737 from storage" id=dcfba211-bd55-4c39-a1e9-29d836f1807c name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 20:14:12 force-systemd-env-242564 crio[838]: time="2025-10-09T20:14:12.3893652Z" level=info msg="createCtr: releasing container name k8s_etcd_etcd-force-systemd-env-242564_kube-system_308a80072c9963a1da74042eb4b80985_0" id=dcfba211-bd55-4c39-a1e9-29d836f1807c name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 20:14:15 force-systemd-env-242564 crio[838]: time="2025-10-09T20:14:15.366262089Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.34.1" id=6ff8935f-0270-49c5-b0a1-b4a562f96818 name=/runtime.v1.ImageService/ImageStatus
	Oct 09 20:14:15 force-systemd-env-242564 crio[838]: time="2025-10-09T20:14:15.367174104Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.34.1" id=a4b924cc-7d42-4392-a337-07554342bc59 name=/runtime.v1.ImageService/ImageStatus
	Oct 09 20:14:15 force-systemd-env-242564 crio[838]: time="2025-10-09T20:14:15.368185599Z" level=info msg="Creating container: kube-system/kube-apiserver-force-systemd-env-242564/kube-apiserver" id=9f646e0f-5c28-4d09-a0fe-300b92210935 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 20:14:15 force-systemd-env-242564 crio[838]: time="2025-10-09T20:14:15.368432635Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 09 20:14:15 force-systemd-env-242564 crio[838]: time="2025-10-09T20:14:15.373161016Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 09 20:14:15 force-systemd-env-242564 crio[838]: time="2025-10-09T20:14:15.373805022Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 09 20:14:15 force-systemd-env-242564 crio[838]: time="2025-10-09T20:14:15.384649465Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=9f646e0f-5c28-4d09-a0fe-300b92210935 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 20:14:15 force-systemd-env-242564 crio[838]: time="2025-10-09T20:14:15.385851298Z" level=info msg="createCtr: deleting container ID c1855628afa5830ad00de91f2b03f818b913a26f430bfe5c58b5cbbf61474e2f from idIndex" id=9f646e0f-5c28-4d09-a0fe-300b92210935 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 20:14:15 force-systemd-env-242564 crio[838]: time="2025-10-09T20:14:15.385893859Z" level=info msg="createCtr: removing container c1855628afa5830ad00de91f2b03f818b913a26f430bfe5c58b5cbbf61474e2f" id=9f646e0f-5c28-4d09-a0fe-300b92210935 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 20:14:15 force-systemd-env-242564 crio[838]: time="2025-10-09T20:14:15.38593107Z" level=info msg="createCtr: deleting container c1855628afa5830ad00de91f2b03f818b913a26f430bfe5c58b5cbbf61474e2f from storage" id=9f646e0f-5c28-4d09-a0fe-300b92210935 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 20:14:15 force-systemd-env-242564 crio[838]: time="2025-10-09T20:14:15.388610194Z" level=info msg="createCtr: releasing container name k8s_kube-apiserver_kube-apiserver-force-systemd-env-242564_kube-system_6570018256a81063ff7e10a053ddcaa1_0" id=9f646e0f-5c28-4d09-a0fe-300b92210935 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 20:14:23 force-systemd-env-242564 crio[838]: time="2025-10-09T20:14:23.367165881Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.34.1" id=684aae01-44ca-4738-8538-77acf7674657 name=/runtime.v1.ImageService/ImageStatus
	Oct 09 20:14:23 force-systemd-env-242564 crio[838]: time="2025-10-09T20:14:23.368375041Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.34.1" id=9ee7b88e-3741-46e4-897f-cd658d8bc3be name=/runtime.v1.ImageService/ImageStatus
	Oct 09 20:14:23 force-systemd-env-242564 crio[838]: time="2025-10-09T20:14:23.372679185Z" level=info msg="Creating container: kube-system/kube-controller-manager-force-systemd-env-242564/kube-controller-manager" id=05cd7191-39a3-4fe8-88b5-25929da20148 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 20:14:23 force-systemd-env-242564 crio[838]: time="2025-10-09T20:14:23.37295425Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 09 20:14:23 force-systemd-env-242564 crio[838]: time="2025-10-09T20:14:23.390367143Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 09 20:14:23 force-systemd-env-242564 crio[838]: time="2025-10-09T20:14:23.39114309Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 09 20:14:23 force-systemd-env-242564 crio[838]: time="2025-10-09T20:14:23.414303851Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=05cd7191-39a3-4fe8-88b5-25929da20148 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 20:14:23 force-systemd-env-242564 crio[838]: time="2025-10-09T20:14:23.415863138Z" level=info msg="createCtr: deleting container ID 103577e63fc16b26a1bc88b00da7fa477854afa659363fb39366eef1b7c8a5ba from idIndex" id=05cd7191-39a3-4fe8-88b5-25929da20148 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 20:14:23 force-systemd-env-242564 crio[838]: time="2025-10-09T20:14:23.41590368Z" level=info msg="createCtr: removing container 103577e63fc16b26a1bc88b00da7fa477854afa659363fb39366eef1b7c8a5ba" id=05cd7191-39a3-4fe8-88b5-25929da20148 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 20:14:23 force-systemd-env-242564 crio[838]: time="2025-10-09T20:14:23.41594327Z" level=info msg="createCtr: deleting container 103577e63fc16b26a1bc88b00da7fa477854afa659363fb39366eef1b7c8a5ba from storage" id=05cd7191-39a3-4fe8-88b5-25929da20148 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 20:14:23 force-systemd-env-242564 crio[838]: time="2025-10-09T20:14:23.418616676Z" level=info msg="createCtr: releasing container name k8s_kube-controller-manager_kube-controller-manager-force-systemd-env-242564_kube-system_8d2fb027502fb2ef41e1f317d04ff230_0" id=05cd7191-39a3-4fe8-88b5-25929da20148 name=/runtime.v1.RuntimeService/CreateContainer
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1009 20:14:24.108475    2478 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1009 20:14:24.109073    2478 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1009 20:14:24.110712    2478 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1009 20:14:24.111265    2478 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1009 20:14:24.112768    2478 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[  +4.492991] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:37] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:38] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:40] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:45] overlayfs: idmapped layers are currently not supported
	[ +36.012100] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:47] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:48] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:49] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:50] overlayfs: idmapped layers are currently not supported
	[ +27.967875] overlayfs: idmapped layers are currently not supported
	[  +2.167003] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:51] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:52] overlayfs: idmapped layers are currently not supported
	[ +41.056229] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:53] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:54] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:55] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:57] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:59] overlayfs: idmapped layers are currently not supported
	[ +30.257956] overlayfs: idmapped layers are currently not supported
	[Oct 9 20:02] overlayfs: idmapped layers are currently not supported
	[Oct 9 20:04] overlayfs: idmapped layers are currently not supported
	[Oct 9 20:06] overlayfs: idmapped layers are currently not supported
	[Oct 9 20:12] overlayfs: idmapped layers are currently not supported
	
	
	==> kernel <==
	 20:14:24 up  2:56,  0 user,  load average: 0.43, 0.82, 1.38
	Linux force-systemd-env-242564 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Oct 09 20:14:12 force-systemd-env-242564 kubelet[1778]: E1009 20:14:12.389830    1778 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"etcd\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/etcd-force-systemd-env-242564" podUID="308a80072c9963a1da74042eb4b80985"
	Oct 09 20:14:12 force-systemd-env-242564 kubelet[1778]: E1009 20:14:12.432901    1778 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"force-systemd-env-242564\" not found"
	Oct 09 20:14:15 force-systemd-env-242564 kubelet[1778]: E1009 20:14:15.365792    1778 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"force-systemd-env-242564\" not found" node="force-systemd-env-242564"
	Oct 09 20:14:15 force-systemd-env-242564 kubelet[1778]: E1009 20:14:15.388940    1778 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 09 20:14:15 force-systemd-env-242564 kubelet[1778]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 09 20:14:15 force-systemd-env-242564 kubelet[1778]:  > podSandboxID="3908a185f145bb8961783f4e27f054662289c335e524bf721422bc9abf0016c6"
	Oct 09 20:14:15 force-systemd-env-242564 kubelet[1778]: E1009 20:14:15.389091    1778 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 09 20:14:15 force-systemd-env-242564 kubelet[1778]:         container kube-apiserver start failed in pod kube-apiserver-force-systemd-env-242564_kube-system(6570018256a81063ff7e10a053ddcaa1): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 09 20:14:15 force-systemd-env-242564 kubelet[1778]:  > logger="UnhandledError"
	Oct 09 20:14:15 force-systemd-env-242564 kubelet[1778]: E1009 20:14:15.389243    1778 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-apiserver-force-systemd-env-242564" podUID="6570018256a81063ff7e10a053ddcaa1"
	Oct 09 20:14:19 force-systemd-env-242564 kubelet[1778]: E1009 20:14:19.010640    1778 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.76.2:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/force-systemd-env-242564?timeout=10s\": dial tcp 192.168.76.2:8443: connect: connection refused" interval="7s"
	Oct 09 20:14:19 force-systemd-env-242564 kubelet[1778]: I1009 20:14:19.194886    1778 kubelet_node_status.go:75] "Attempting to register node" node="force-systemd-env-242564"
	Oct 09 20:14:19 force-systemd-env-242564 kubelet[1778]: E1009 20:14:19.195293    1778 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://192.168.76.2:8443/api/v1/nodes\": dial tcp 192.168.76.2:8443: connect: connection refused" node="force-systemd-env-242564"
	Oct 09 20:14:19 force-systemd-env-242564 kubelet[1778]: E1009 20:14:19.262442    1778 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://192.168.76.2:8443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 192.168.76.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	Oct 09 20:14:19 force-systemd-env-242564 kubelet[1778]: E1009 20:14:19.578152    1778 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://192.168.76.2:8443/api/v1/namespaces/default/events\": dial tcp 192.168.76.2:8443: connect: connection refused" event="&Event{ObjectMeta:{force-systemd-env-242564.186ceb9ef89399f3  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:force-systemd-env-242564,UID:force-systemd-env-242564,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node force-systemd-env-242564 status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:force-systemd-env-242564,},FirstTimestamp:2025-10-09 20:10:22.402804211 +0000 UTC m=+1.424954019,LastTimestamp:2025-10-09 20:10:22.402804211 +0000 UTC m=+1.424954019,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:force-systemd-env-242564,}"
	Oct 09 20:14:21 force-systemd-env-242564 kubelet[1778]: E1009 20:14:21.948208    1778 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://192.168.76.2:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 192.168.76.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	Oct 09 20:14:22 force-systemd-env-242564 kubelet[1778]: E1009 20:14:22.433965    1778 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"force-systemd-env-242564\" not found"
	Oct 09 20:14:23 force-systemd-env-242564 kubelet[1778]: E1009 20:14:23.366278    1778 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"force-systemd-env-242564\" not found" node="force-systemd-env-242564"
	Oct 09 20:14:23 force-systemd-env-242564 kubelet[1778]: E1009 20:14:23.424442    1778 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 09 20:14:23 force-systemd-env-242564 kubelet[1778]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 09 20:14:23 force-systemd-env-242564 kubelet[1778]:  > podSandboxID="29f4c83a50b52b1af0609cdfc263b57adec8c47e341c966fc3e6ee28ba0410cd"
	Oct 09 20:14:23 force-systemd-env-242564 kubelet[1778]: E1009 20:14:23.424538    1778 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 09 20:14:23 force-systemd-env-242564 kubelet[1778]:         container kube-controller-manager start failed in pod kube-controller-manager-force-systemd-env-242564_kube-system(8d2fb027502fb2ef41e1f317d04ff230): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 09 20:14:23 force-systemd-env-242564 kubelet[1778]:  > logger="UnhandledError"
	Oct 09 20:14:23 force-systemd-env-242564 kubelet[1778]: E1009 20:14:23.424574    1778 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-controller-manager-force-systemd-env-242564" podUID="8d2fb027502fb2ef41e1f317d04ff230"
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p force-systemd-env-242564 -n force-systemd-env-242564
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p force-systemd-env-242564 -n force-systemd-env-242564: exit status 6 (362.432064ms)

                                                
                                                
-- stdout --
	Stopped
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1009 20:14:24.583975  471228 status.go:458] kubeconfig endpoint: get endpoint: "force-systemd-env-242564" does not appear in /home/jenkins/minikube-integration/21683-294150/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:262: status error: exit status 6 (may be ok)
helpers_test.go:264: "force-systemd-env-242564" apiserver is not running, skipping kubectl commands (state="Stopped")
helpers_test.go:175: Cleaning up "force-systemd-env-242564" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-env-242564
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-env-242564: (1.945237234s)
--- FAIL: TestForceSystemdEnv (511.84s)
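
Every container-create failure in the CRI-O and kubelet excerpts above ends with "cannot open sd-bus: No such file or directory", which is the error the OCI runtime reports when it is asked to place a container into a systemd-managed cgroup but cannot reach a systemd D-Bus socket inside the node. A minimal sketch of how this could be confirmed from inside the node, before the profile is deleted (the profile name is taken from the log above; the config and socket paths are the standard CRI-O/systemd locations, not values captured in this run):

    # open a shell inside the failed node
    out/minikube-linux-arm64 -p force-systemd-env-242564 ssh
    # check which cgroup manager CRI-O was configured with (systemd vs cgroupfs)
    sudo grep -R "cgroup_manager" /etc/crio/crio.conf /etc/crio/crio.conf.d/ 2>/dev/null
    # check whether a systemd D-Bus socket is actually reachable in the node
    ls -l /run/dbus/system_bus_socket /run/systemd/private 2>/dev/null
    # list the failed control-plane containers, as kubeadm's troubleshooting hint suggests
    sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause

If cgroup_manager is "systemd" but neither socket exists, pointing CRI-O at the cgroupfs manager (or making systemd/D-Bus available in the node) would be the usual fix; the captured logs alone do not decide which of the two applies to this run.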

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmdConnect (603.57s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1636: (dbg) Run:  kubectl --context functional-326957 create deployment hello-node-connect --image kicbase/echo-server
functional_test.go:1640: (dbg) Run:  kubectl --context functional-326957 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1645: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:352: "hello-node-connect-7d85dfc575-zwl9v" [23d5a564-ebfc-4fd5-959e-08300e0e452f] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:337: TestFunctional/parallel/ServiceCmdConnect: WARNING: pod list for "default" "app=hello-node-connect" returned: client rate limiter Wait returned an error: context deadline exceeded
functional_test.go:1645: ***** TestFunctional/parallel/ServiceCmdConnect: pod "app=hello-node-connect" failed to start within 10m0s: context deadline exceeded ****
functional_test.go:1645: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-326957 -n functional-326957
functional_test.go:1645: TestFunctional/parallel/ServiceCmdConnect: showing logs for failed pods as of 2025-10-09 19:21:33.81401166 +0000 UTC m=+1297.418809422
functional_test.go:1645: (dbg) Run:  kubectl --context functional-326957 describe po hello-node-connect-7d85dfc575-zwl9v -n default
functional_test.go:1645: (dbg) kubectl --context functional-326957 describe po hello-node-connect-7d85dfc575-zwl9v -n default:
Name:             hello-node-connect-7d85dfc575-zwl9v
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-326957/192.168.49.2
Start Time:       Thu, 09 Oct 2025 19:11:33 +0000
Labels:           app=hello-node-connect
pod-template-hash=7d85dfc575
Annotations:      <none>
Status:           Pending
IP:               10.244.0.6
IPs:
IP:           10.244.0.6
Controlled By:  ReplicaSet/hello-node-connect-7d85dfc575
Containers:
echo-server:
Container ID:   
Image:          kicbase/echo-server
Image ID:       
Port:           <none>
Host Port:      <none>
State:          Waiting
Reason:       ImagePullBackOff
Ready:          False
Restart Count:  0
Environment:    <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-tglk4 (ro)
Conditions:
Type                        Status
PodReadyToStartContainers   True 
Initialized                 True 
Ready                       False 
ContainersReady             False 
PodScheduled                True 
Volumes:
kube-api-access-tglk4:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
Optional:                false
DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                     From               Message
----     ------     ----                    ----               -------
Normal   Scheduled  10m                     default-scheduler  Successfully assigned default/hello-node-connect-7d85dfc575-zwl9v to functional-326957
Normal   Pulling    7m1s (x5 over 10m)      kubelet            Pulling image "kicbase/echo-server"
Warning  Failed     7m1s (x5 over 10m)      kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
Warning  Failed     7m1s (x5 over 10m)      kubelet            Error: ErrImagePull
Normal   BackOff    4m58s (x21 over 9m59s)  kubelet            Back-off pulling image "kicbase/echo-server"
Warning  Failed     4m58s (x21 over 9m59s)  kubelet            Error: ImagePullBackOff
functional_test.go:1645: (dbg) Run:  kubectl --context functional-326957 logs hello-node-connect-7d85dfc575-zwl9v -n default
functional_test.go:1645: (dbg) Non-zero exit: kubectl --context functional-326957 logs hello-node-connect-7d85dfc575-zwl9v -n default: exit status 1 (113.039388ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "echo-server" in pod "hello-node-connect-7d85dfc575-zwl9v" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
functional_test.go:1645: kubectl --context functional-326957 logs hello-node-connect-7d85dfc575-zwl9v -n default: exit status 1
functional_test.go:1646: failed waiting for hello-node pod: app=hello-node-connect within 10m0s: context deadline exceeded
functional_test.go:1608: service test failed - dumping debug information
functional_test.go:1609: -----------------------service failure post-mortem--------------------------------
functional_test.go:1612: (dbg) Run:  kubectl --context functional-326957 describe po hello-node-connect
functional_test.go:1616: hello-node pod describe:
Name:             hello-node-connect-7d85dfc575-zwl9v
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-326957/192.168.49.2
Start Time:       Thu, 09 Oct 2025 19:11:33 +0000
Labels:           app=hello-node-connect
pod-template-hash=7d85dfc575
Annotations:      <none>
Status:           Pending
IP:               10.244.0.6
IPs:
IP:           10.244.0.6
Controlled By:  ReplicaSet/hello-node-connect-7d85dfc575
Containers:
echo-server:
Container ID:   
Image:          kicbase/echo-server
Image ID:       
Port:           <none>
Host Port:      <none>
State:          Waiting
Reason:       ImagePullBackOff
Ready:          False
Restart Count:  0
Environment:    <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-tglk4 (ro)
Conditions:
Type                        Status
PodReadyToStartContainers   True 
Initialized                 True 
Ready                       False 
ContainersReady             False 
PodScheduled                True 
Volumes:
kube-api-access-tglk4:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
Optional:                false
DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                   From               Message
----     ------     ----                  ----               -------
Normal   Scheduled  10m                   default-scheduler  Successfully assigned default/hello-node-connect-7d85dfc575-zwl9v to functional-326957
Normal   Pulling    7m2s (x5 over 10m)    kubelet            Pulling image "kicbase/echo-server"
Warning  Failed     7m2s (x5 over 10m)    kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
Warning  Failed     7m2s (x5 over 10m)    kubelet            Error: ErrImagePull
Normal   BackOff    4m59s (x21 over 10m)  kubelet            Back-off pulling image "kicbase/echo-server"
Warning  Failed     4m59s (x21 over 10m)  kubelet            Error: ImagePullBackOff

                                                
                                                
functional_test.go:1618: (dbg) Run:  kubectl --context functional-326957 logs -l app=hello-node-connect
functional_test.go:1618: (dbg) Non-zero exit: kubectl --context functional-326957 logs -l app=hello-node-connect: exit status 1 (92.235778ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "echo-server" in pod "hello-node-connect-7d85dfc575-zwl9v" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
functional_test.go:1620: "kubectl --context functional-326957 logs -l app=hello-node-connect" failed: exit status 1
functional_test.go:1622: hello-node logs:
functional_test.go:1624: (dbg) Run:  kubectl --context functional-326957 describe svc hello-node-connect
functional_test.go:1628: hello-node svc describe:
Name:                     hello-node-connect
Namespace:                default
Labels:                   app=hello-node-connect
Annotations:              <none>
Selector:                 app=hello-node-connect
Type:                     NodePort
IP Family Policy:         SingleStack
IP Families:              IPv4
IP:                       10.96.169.39
IPs:                      10.96.169.39
Port:                     <unset>  8080/TCP
TargetPort:               8080/TCP
NodePort:                 <unset>  31292/TCP
Endpoints:                
Session Affinity:         None
External Traffic Policy:  Cluster
Internal Traffic Policy:  Cluster
Events:                   <none>
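The pull failures recorded above ("short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list") come from the containers-image short-name policy used by CRI-O: with short-name-mode = "enforcing", an unqualified image name that matches more than one unqualified-search registry cannot be resolved non-interactively, so the kubelet's pulls keep failing and the Service above never gets endpoints. Two sketches of the usual workarounds follow; the fully qualified registry and the config path are illustrative assumptions, not values captured in this run:

    # option 1: point the deployment at a fully qualified image reference
    kubectl --context functional-326957 set image deployment/hello-node-connect \
      echo-server=docker.io/kicbase/echo-server:latest

    # option 2 (inside the node): relax the short-name policy for CRI-O by setting
    #   short-name-mode = "permissive"
    # in /etc/containers/registries.conf, then restart the runtime
    sudo systemctl restart crio

Either change only addresses the pull error itself; the test, as shown at the top of this log, deliberately creates the deployment with the unqualified name kicbase/echo-server.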
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-326957
helpers_test.go:243: (dbg) docker inspect functional-326957:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "adf02523c0057c2a3a364385f75e388c341f21b904915c0fbaa9633309924ecd",
	        "Created": "2025-10-09T19:08:31.492970184Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 311727,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-09T19:08:31.554372509Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:0e619a02aa1f399c748a05785ca381533d7a01df724b62f3a9ddd9e288db8f6f",
	        "ResolvConfPath": "/var/lib/docker/containers/adf02523c0057c2a3a364385f75e388c341f21b904915c0fbaa9633309924ecd/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/adf02523c0057c2a3a364385f75e388c341f21b904915c0fbaa9633309924ecd/hostname",
	        "HostsPath": "/var/lib/docker/containers/adf02523c0057c2a3a364385f75e388c341f21b904915c0fbaa9633309924ecd/hosts",
	        "LogPath": "/var/lib/docker/containers/adf02523c0057c2a3a364385f75e388c341f21b904915c0fbaa9633309924ecd/adf02523c0057c2a3a364385f75e388c341f21b904915c0fbaa9633309924ecd-json.log",
	        "Name": "/functional-326957",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-326957:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-326957",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "adf02523c0057c2a3a364385f75e388c341f21b904915c0fbaa9633309924ecd",
	                "LowerDir": "/var/lib/docker/overlay2/70ec80a7534255217cf05ebaac3ff4d08bcd0c4683aebb8756265f515748ab4d-init/diff:/var/lib/docker/overlay2/810a91395ed9b7ed2c0bbbdee8600efcf64f88722cbabc47d471235a9f901ed9/diff",
	                "MergedDir": "/var/lib/docker/overlay2/70ec80a7534255217cf05ebaac3ff4d08bcd0c4683aebb8756265f515748ab4d/merged",
	                "UpperDir": "/var/lib/docker/overlay2/70ec80a7534255217cf05ebaac3ff4d08bcd0c4683aebb8756265f515748ab4d/diff",
	                "WorkDir": "/var/lib/docker/overlay2/70ec80a7534255217cf05ebaac3ff4d08bcd0c4683aebb8756265f515748ab4d/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-326957",
	                "Source": "/var/lib/docker/volumes/functional-326957/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-326957",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-326957",
	                "name.minikube.sigs.k8s.io": "functional-326957",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "22d893d8e2417365e76f7efe648fe9e1110a3556a6a9c3cf593ab838ed6b9fdd",
	            "SandboxKey": "/var/run/docker/netns/22d893d8e241",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33149"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33150"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33153"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33151"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33152"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-326957": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "1e:3f:53:9f:51:b6",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "7349c9800541bd3ba5833c3402c15fab410917421aece0ba5ed3a1f3b7b4a393",
	                    "EndpointID": "f851a31cba7b0ef0fb6788c5412349dcd78683de5eb3df9729973ce3d2b90d25",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-326957",
	                        "adf02523c005"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-326957 -n functional-326957
helpers_test.go:252: <<< TestFunctional/parallel/ServiceCmdConnect FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p functional-326957 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p functional-326957 logs -n 25: (1.522409119s)
helpers_test.go:260: TestFunctional/parallel/ServiceCmdConnect logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                           ARGS                                                                            │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ image   │ functional-326957 image load --daemon kicbase/echo-server:functional-326957 --alsologtostderr                                                             │ functional-326957 │ jenkins │ v1.37.0 │ 09 Oct 25 19:11 UTC │ 09 Oct 25 19:11 UTC │
	│ ssh     │ functional-326957 ssh sudo cat /usr/share/ca-certificates/2960022.pem                                                                                     │ functional-326957 │ jenkins │ v1.37.0 │ 09 Oct 25 19:11 UTC │ 09 Oct 25 19:11 UTC │
	│ ssh     │ functional-326957 ssh sudo cat /etc/ssl/certs/3ec20f2e.0                                                                                                  │ functional-326957 │ jenkins │ v1.37.0 │ 09 Oct 25 19:11 UTC │ 09 Oct 25 19:11 UTC │
	│ image   │ functional-326957 image ls                                                                                                                                │ functional-326957 │ jenkins │ v1.37.0 │ 09 Oct 25 19:11 UTC │ 09 Oct 25 19:11 UTC │
	│ ssh     │ functional-326957 ssh sudo cat /etc/test/nested/copy/296002/hosts                                                                                         │ functional-326957 │ jenkins │ v1.37.0 │ 09 Oct 25 19:11 UTC │ 09 Oct 25 19:11 UTC │
	│ image   │ functional-326957 image load --daemon kicbase/echo-server:functional-326957 --alsologtostderr                                                             │ functional-326957 │ jenkins │ v1.37.0 │ 09 Oct 25 19:11 UTC │ 09 Oct 25 19:11 UTC │
	│ cp      │ functional-326957 cp testdata/cp-test.txt /home/docker/cp-test.txt                                                                                        │ functional-326957 │ jenkins │ v1.37.0 │ 09 Oct 25 19:11 UTC │ 09 Oct 25 19:11 UTC │
	│ ssh     │ functional-326957 ssh -n functional-326957 sudo cat /home/docker/cp-test.txt                                                                              │ functional-326957 │ jenkins │ v1.37.0 │ 09 Oct 25 19:11 UTC │ 09 Oct 25 19:11 UTC │
	│ image   │ functional-326957 image ls                                                                                                                                │ functional-326957 │ jenkins │ v1.37.0 │ 09 Oct 25 19:11 UTC │ 09 Oct 25 19:11 UTC │
	│ cp      │ functional-326957 cp functional-326957:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd1678562008/001/cp-test.txt                                │ functional-326957 │ jenkins │ v1.37.0 │ 09 Oct 25 19:11 UTC │ 09 Oct 25 19:11 UTC │
	│ image   │ functional-326957 image save kicbase/echo-server:functional-326957 /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar --alsologtostderr │ functional-326957 │ jenkins │ v1.37.0 │ 09 Oct 25 19:11 UTC │ 09 Oct 25 19:11 UTC │
	│ ssh     │ functional-326957 ssh -n functional-326957 sudo cat /home/docker/cp-test.txt                                                                              │ functional-326957 │ jenkins │ v1.37.0 │ 09 Oct 25 19:11 UTC │ 09 Oct 25 19:11 UTC │
	│ image   │ functional-326957 image rm kicbase/echo-server:functional-326957 --alsologtostderr                                                                        │ functional-326957 │ jenkins │ v1.37.0 │ 09 Oct 25 19:11 UTC │ 09 Oct 25 19:11 UTC │
	│ cp      │ functional-326957 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt                                                                                 │ functional-326957 │ jenkins │ v1.37.0 │ 09 Oct 25 19:11 UTC │ 09 Oct 25 19:11 UTC │
	│ image   │ functional-326957 image ls                                                                                                                                │ functional-326957 │ jenkins │ v1.37.0 │ 09 Oct 25 19:11 UTC │ 09 Oct 25 19:11 UTC │
	│ ssh     │ functional-326957 ssh -n functional-326957 sudo cat /tmp/does/not/exist/cp-test.txt                                                                       │ functional-326957 │ jenkins │ v1.37.0 │ 09 Oct 25 19:11 UTC │ 09 Oct 25 19:11 UTC │
	│ image   │ functional-326957 image load /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar --alsologtostderr                                       │ functional-326957 │ jenkins │ v1.37.0 │ 09 Oct 25 19:11 UTC │ 09 Oct 25 19:11 UTC │
	│ ssh     │ functional-326957 ssh echo hello                                                                                                                          │ functional-326957 │ jenkins │ v1.37.0 │ 09 Oct 25 19:11 UTC │ 09 Oct 25 19:11 UTC │
	│ image   │ functional-326957 image save --daemon kicbase/echo-server:functional-326957 --alsologtostderr                                                             │ functional-326957 │ jenkins │ v1.37.0 │ 09 Oct 25 19:11 UTC │ 09 Oct 25 19:11 UTC │
	│ ssh     │ functional-326957 ssh cat /etc/hostname                                                                                                                   │ functional-326957 │ jenkins │ v1.37.0 │ 09 Oct 25 19:11 UTC │ 09 Oct 25 19:11 UTC │
	│ tunnel  │ functional-326957 tunnel --alsologtostderr                                                                                                                │ functional-326957 │ jenkins │ v1.37.0 │ 09 Oct 25 19:11 UTC │                     │
	│ tunnel  │ functional-326957 tunnel --alsologtostderr                                                                                                                │ functional-326957 │ jenkins │ v1.37.0 │ 09 Oct 25 19:11 UTC │                     │
	│ tunnel  │ functional-326957 tunnel --alsologtostderr                                                                                                                │ functional-326957 │ jenkins │ v1.37.0 │ 09 Oct 25 19:11 UTC │                     │
	│ addons  │ functional-326957 addons list                                                                                                                             │ functional-326957 │ jenkins │ v1.37.0 │ 09 Oct 25 19:11 UTC │ 09 Oct 25 19:11 UTC │
	│ addons  │ functional-326957 addons list -o json                                                                                                                     │ functional-326957 │ jenkins │ v1.37.0 │ 09 Oct 25 19:11 UTC │ 09 Oct 25 19:11 UTC │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/09 19:10:26
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1009 19:10:26.539949  315894 out.go:360] Setting OutFile to fd 1 ...
	I1009 19:10:26.540065  315894 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1009 19:10:26.540069  315894 out.go:374] Setting ErrFile to fd 2...
	I1009 19:10:26.540073  315894 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1009 19:10:26.540343  315894 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21683-294150/.minikube/bin
	I1009 19:10:26.540750  315894 out.go:368] Setting JSON to false
	I1009 19:10:26.541746  315894 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":6766,"bootTime":1760030261,"procs":173,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1009 19:10:26.541805  315894 start.go:143] virtualization:  
	I1009 19:10:26.545499  315894 out.go:179] * [functional-326957] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1009 19:10:26.548391  315894 out.go:179]   - MINIKUBE_LOCATION=21683
	I1009 19:10:26.548459  315894 notify.go:221] Checking for updates...
	I1009 19:10:26.554298  315894 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1009 19:10:26.557315  315894 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21683-294150/kubeconfig
	I1009 19:10:26.560315  315894 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21683-294150/.minikube
	I1009 19:10:26.563296  315894 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1009 19:10:26.566120  315894 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1009 19:10:26.569572  315894 config.go:182] Loaded profile config "functional-326957": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1009 19:10:26.569669  315894 driver.go:422] Setting default libvirt URI to qemu:///system
	I1009 19:10:26.604427  315894 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1009 19:10:26.604594  315894 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1009 19:10:26.667021  315894 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:40 OomKillDisable:true NGoroutines:65 SystemTime:2025-10-09 19:10:26.656797615 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214827008 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1009 19:10:26.667113  315894 docker.go:319] overlay module found
	I1009 19:10:26.670256  315894 out.go:179] * Using the docker driver based on existing profile
	I1009 19:10:26.673166  315894 start.go:309] selected driver: docker
	I1009 19:10:26.673177  315894 start.go:930] validating driver "docker" against &{Name:functional-326957 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-326957 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false D
isableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1009 19:10:26.673259  315894 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1009 19:10:26.673372  315894 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1009 19:10:26.729966  315894 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:40 OomKillDisable:true NGoroutines:65 SystemTime:2025-10-09 19:10:26.72007686 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aa
rch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214827008 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path
:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1009 19:10:26.730433  315894 start_flags.go:993] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1009 19:10:26.730458  315894 cni.go:84] Creating CNI manager for ""
	I1009 19:10:26.730519  315894 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1009 19:10:26.730578  315894 start.go:353] cluster config:
	{Name:functional-326957 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-326957 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Containe
rRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false Di
sableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1009 19:10:26.733675  315894 out.go:179] * Starting "functional-326957" primary control-plane node in "functional-326957" cluster
	I1009 19:10:26.736393  315894 cache.go:123] Beginning downloading kic base image for docker with crio
	I1009 19:10:26.739328  315894 out.go:179] * Pulling base image v0.0.48-1759745255-21703 ...
	I1009 19:10:26.742209  315894 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon
	I1009 19:10:26.742277  315894 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1009 19:10:26.742322  315894 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21683-294150/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1009 19:10:26.742329  315894 cache.go:58] Caching tarball of preloaded images
	I1009 19:10:26.742426  315894 preload.go:233] Found /home/jenkins/minikube-integration/21683-294150/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1009 19:10:26.742435  315894 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1009 19:10:26.742552  315894 profile.go:143] Saving config to /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/functional-326957/config.json ...
	I1009 19:10:26.769468  315894 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon, skipping pull
	I1009 19:10:26.769481  315894 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 exists in daemon, skipping load
	I1009 19:10:26.769493  315894 cache.go:232] Successfully downloaded all kic artifacts
	I1009 19:10:26.769515  315894 start.go:361] acquireMachinesLock for functional-326957: {Name:mk07f46654a2f42a2e6162754462eb283d01eb2f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1009 19:10:26.769577  315894 start.go:365] duration metric: took 45.964µs to acquireMachinesLock for "functional-326957"
	I1009 19:10:26.769598  315894 start.go:97] Skipping create...Using existing machine configuration
	I1009 19:10:26.769603  315894 fix.go:55] fixHost starting: 
	I1009 19:10:26.769861  315894 cli_runner.go:164] Run: docker container inspect functional-326957 --format={{.State.Status}}
	I1009 19:10:26.787468  315894 fix.go:113] recreateIfNeeded on functional-326957: state=Running err=<nil>
	W1009 19:10:26.787490  315894 fix.go:139] unexpected machine state, will restart: <nil>
	I1009 19:10:26.790751  315894 out.go:252] * Updating the running docker "functional-326957" container ...
	I1009 19:10:26.790785  315894 machine.go:93] provisionDockerMachine start ...
	I1009 19:10:26.790869  315894 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-326957
	I1009 19:10:26.809053  315894 main.go:141] libmachine: Using SSH client type: native
	I1009 19:10:26.809420  315894 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33149 <nil> <nil>}
	I1009 19:10:26.809427  315894 main.go:141] libmachine: About to run SSH command:
	hostname
	I1009 19:10:26.956801  315894 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-326957
	
	I1009 19:10:26.956823  315894 ubuntu.go:182] provisioning hostname "functional-326957"
	I1009 19:10:26.956884  315894 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-326957
	I1009 19:10:26.975113  315894 main.go:141] libmachine: Using SSH client type: native
	I1009 19:10:26.975413  315894 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33149 <nil> <nil>}
	I1009 19:10:26.975423  315894 main.go:141] libmachine: About to run SSH command:
	sudo hostname functional-326957 && echo "functional-326957" | sudo tee /etc/hostname
	I1009 19:10:27.147631  315894 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-326957
	
	I1009 19:10:27.147697  315894 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-326957
	I1009 19:10:27.166773  315894 main.go:141] libmachine: Using SSH client type: native
	I1009 19:10:27.167083  315894 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33149 <nil> <nil>}
	I1009 19:10:27.167098  315894 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-326957' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-326957/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-326957' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1009 19:10:27.317673  315894 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1009 19:10:27.317690  315894 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21683-294150/.minikube CaCertPath:/home/jenkins/minikube-integration/21683-294150/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21683-294150/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21683-294150/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21683-294150/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21683-294150/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21683-294150/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21683-294150/.minikube}
	I1009 19:10:27.317716  315894 ubuntu.go:190] setting up certificates
	I1009 19:10:27.317726  315894 provision.go:84] configureAuth start
	I1009 19:10:27.317797  315894 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-326957
	I1009 19:10:27.335732  315894 provision.go:143] copyHostCerts
	I1009 19:10:27.335789  315894 exec_runner.go:144] found /home/jenkins/minikube-integration/21683-294150/.minikube/ca.pem, removing ...
	I1009 19:10:27.335807  315894 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21683-294150/.minikube/ca.pem
	I1009 19:10:27.335883  315894 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-294150/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21683-294150/.minikube/ca.pem (1078 bytes)
	I1009 19:10:27.335999  315894 exec_runner.go:144] found /home/jenkins/minikube-integration/21683-294150/.minikube/cert.pem, removing ...
	I1009 19:10:27.336004  315894 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21683-294150/.minikube/cert.pem
	I1009 19:10:27.336030  315894 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-294150/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21683-294150/.minikube/cert.pem (1123 bytes)
	I1009 19:10:27.336093  315894 exec_runner.go:144] found /home/jenkins/minikube-integration/21683-294150/.minikube/key.pem, removing ...
	I1009 19:10:27.336098  315894 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21683-294150/.minikube/key.pem
	I1009 19:10:27.336122  315894 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-294150/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21683-294150/.minikube/key.pem (1679 bytes)
	I1009 19:10:27.336176  315894 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21683-294150/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21683-294150/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21683-294150/.minikube/certs/ca-key.pem org=jenkins.functional-326957 san=[127.0.0.1 192.168.49.2 functional-326957 localhost minikube]
	I1009 19:10:28.385517  315894 provision.go:177] copyRemoteCerts
	I1009 19:10:28.385574  315894 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1009 19:10:28.385621  315894 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-326957
	I1009 19:10:28.410121  315894 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33149 SSHKeyPath:/home/jenkins/minikube-integration/21683-294150/.minikube/machines/functional-326957/id_rsa Username:docker}
	I1009 19:10:28.513697  315894 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-294150/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1009 19:10:28.532551  315894 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-294150/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1009 19:10:28.551071  315894 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-294150/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1009 19:10:28.569907  315894 provision.go:87] duration metric: took 1.252158017s to configureAuth
	I1009 19:10:28.569925  315894 ubuntu.go:206] setting minikube options for container-runtime
	I1009 19:10:28.570126  315894 config.go:182] Loaded profile config "functional-326957": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1009 19:10:28.570222  315894 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-326957
	I1009 19:10:28.588043  315894 main.go:141] libmachine: Using SSH client type: native
	I1009 19:10:28.588365  315894 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33149 <nil> <nil>}
	I1009 19:10:28.588376  315894 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1009 19:10:33.959437  315894 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1009 19:10:33.959451  315894 machine.go:96] duration metric: took 7.16865936s to provisionDockerMachine
	I1009 19:10:33.959460  315894 start.go:294] postStartSetup for "functional-326957" (driver="docker")
	I1009 19:10:33.959470  315894 start.go:323] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1009 19:10:33.959529  315894 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1009 19:10:33.959564  315894 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-326957
	I1009 19:10:33.976875  315894 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33149 SSHKeyPath:/home/jenkins/minikube-integration/21683-294150/.minikube/machines/functional-326957/id_rsa Username:docker}
	I1009 19:10:34.089515  315894 ssh_runner.go:195] Run: cat /etc/os-release
	I1009 19:10:34.093080  315894 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1009 19:10:34.093099  315894 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1009 19:10:34.093133  315894 filesync.go:126] Scanning /home/jenkins/minikube-integration/21683-294150/.minikube/addons for local assets ...
	I1009 19:10:34.093194  315894 filesync.go:126] Scanning /home/jenkins/minikube-integration/21683-294150/.minikube/files for local assets ...
	I1009 19:10:34.093280  315894 filesync.go:149] local asset: /home/jenkins/minikube-integration/21683-294150/.minikube/files/etc/ssl/certs/2960022.pem -> 2960022.pem in /etc/ssl/certs
	I1009 19:10:34.093356  315894 filesync.go:149] local asset: /home/jenkins/minikube-integration/21683-294150/.minikube/files/etc/test/nested/copy/296002/hosts -> hosts in /etc/test/nested/copy/296002
	I1009 19:10:34.093405  315894 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/296002
	I1009 19:10:34.101442  315894 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-294150/.minikube/files/etc/ssl/certs/2960022.pem --> /etc/ssl/certs/2960022.pem (1708 bytes)
	I1009 19:10:34.119598  315894 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-294150/.minikube/files/etc/test/nested/copy/296002/hosts --> /etc/test/nested/copy/296002/hosts (40 bytes)
	I1009 19:10:34.136951  315894 start.go:297] duration metric: took 177.4759ms for postStartSetup
	I1009 19:10:34.137034  315894 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1009 19:10:34.137082  315894 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-326957
	I1009 19:10:34.159395  315894 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33149 SSHKeyPath:/home/jenkins/minikube-integration/21683-294150/.minikube/machines/functional-326957/id_rsa Username:docker}
	I1009 19:10:34.258335  315894 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1009 19:10:34.263119  315894 fix.go:57] duration metric: took 7.493509057s for fixHost
	I1009 19:10:34.263135  315894 start.go:84] releasing machines lock for "functional-326957", held for 7.493549934s
	I1009 19:10:34.263201  315894 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-326957
	I1009 19:10:34.280785  315894 ssh_runner.go:195] Run: cat /version.json
	I1009 19:10:34.280822  315894 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1009 19:10:34.280835  315894 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-326957
	I1009 19:10:34.280875  315894 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-326957
	I1009 19:10:34.301658  315894 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33149 SSHKeyPath:/home/jenkins/minikube-integration/21683-294150/.minikube/machines/functional-326957/id_rsa Username:docker}
	I1009 19:10:34.306837  315894 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33149 SSHKeyPath:/home/jenkins/minikube-integration/21683-294150/.minikube/machines/functional-326957/id_rsa Username:docker}
	I1009 19:10:34.487348  315894 ssh_runner.go:195] Run: systemctl --version
	I1009 19:10:34.494276  315894 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1009 19:10:34.536544  315894 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1009 19:10:34.541279  315894 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1009 19:10:34.541340  315894 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1009 19:10:34.549695  315894 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
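	The step just above renames any pre-existing bridge/podman CNI configs so they cannot conflict with the CNI that minikube manages. A shell-quoted sketch of the same find/mv pattern (the quoting and the `sh -c '...' _ {}` idiom are added here for illustration; the directory and name patterns come from the log):
	  sudo find /etc/cni/net.d -maxdepth 1 -type f \
	    \( \( -name '*bridge*' -or -name '*podman*' \) -and -not -name '*.mk_disabled' \) \
	    -printf '%p, ' -exec sh -c 'mv "$1" "$1.mk_disabled"' _ {} \;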
	I1009 19:10:34.549709  315894 start.go:496] detecting cgroup driver to use...
	I1009 19:10:34.549740  315894 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1009 19:10:34.549791  315894 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1009 19:10:34.566015  315894 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1009 19:10:34.579559  315894 docker.go:218] disabling cri-docker service (if available) ...
	I1009 19:10:34.579621  315894 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1009 19:10:34.595597  315894 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1009 19:10:34.609822  315894 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1009 19:10:34.747914  315894 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1009 19:10:34.888237  315894 docker.go:234] disabling docker service ...
	I1009 19:10:34.888317  315894 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1009 19:10:34.904740  315894 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1009 19:10:34.919049  315894 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1009 19:10:35.055999  315894 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1009 19:10:35.207494  315894 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1009 19:10:35.223950  315894 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1009 19:10:35.242688  315894 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1009 19:10:35.242776  315894 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:10:35.253426  315894 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1009 19:10:35.253488  315894 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:10:35.263651  315894 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:10:35.273983  315894 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:10:35.283510  315894 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1009 19:10:35.292546  315894 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:10:35.302166  315894 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:10:35.310805  315894 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:10:35.319912  315894 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1009 19:10:35.327506  315894 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1009 19:10:35.334943  315894 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 19:10:35.483188  315894 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1009 19:10:41.076433  315894 ssh_runner.go:235] Completed: sudo systemctl restart crio: (5.593219818s)
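	The sequence above is how CRI-O gets reconfigured in place: keys in /etc/crio/crio.conf.d/02-crio.conf are rewritten with sed, then the service is restarted. A minimal sketch of the same edits, assuming that drop-in file already contains pause_image and cgroup_manager keys (commands and values are the ones shown in the log):
	  # point CRI-O at the expected pause image and cgroup driver
	  sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf
	  sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf
	  # conmon follows the pod cgroup; drop any old key and re-add it after cgroup_manager
	  sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf
	  sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf
	  sudo systemctl daemon-reload && sudo systemctl restart crio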
	I1009 19:10:41.076453  315894 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1009 19:10:41.076510  315894 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1009 19:10:41.080624  315894 start.go:564] Will wait 60s for crictl version
	I1009 19:10:41.080679  315894 ssh_runner.go:195] Run: which crictl
	I1009 19:10:41.084360  315894 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1009 19:10:41.116350  315894 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1009 19:10:41.116434  315894 ssh_runner.go:195] Run: crio --version
	I1009 19:10:41.144867  315894 ssh_runner.go:195] Run: crio --version
	I1009 19:10:41.177526  315894 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1009 19:10:41.180376  315894 cli_runner.go:164] Run: docker network inspect functional-326957 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1009 19:10:41.196996  315894 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1009 19:10:41.204783  315894 out.go:179]   - apiserver.enable-admission-plugins=NamespaceAutoProvision
	I1009 19:10:41.207570  315894 kubeadm.go:883] updating cluster {Name:functional-326957 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-326957 Namespace:default APIServerHAVIP: APIServerName:minikubeCA API
ServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType
:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1009 19:10:41.207700  315894 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1009 19:10:41.207786  315894 ssh_runner.go:195] Run: sudo crictl images --output json
	I1009 19:10:41.242166  315894 crio.go:514] all images are preloaded for cri-o runtime.
	I1009 19:10:41.242177  315894 crio.go:433] Images already preloaded, skipping extraction
	I1009 19:10:41.242233  315894 ssh_runner.go:195] Run: sudo crictl images --output json
	I1009 19:10:41.273377  315894 crio.go:514] all images are preloaded for cri-o runtime.
	I1009 19:10:41.273394  315894 cache_images.go:85] Images are preloaded, skipping loading
	I1009 19:10:41.273400  315894 kubeadm.go:934] updating node { 192.168.49.2 8441 v1.34.1 crio true true} ...
	I1009 19:10:41.273492  315894 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=functional-326957 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:functional-326957 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1009 19:10:41.273583  315894 ssh_runner.go:195] Run: crio config
	I1009 19:10:41.336964  315894 extraconfig.go:125] Overwriting default enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota with user provided enable-admission-plugins=NamespaceAutoProvision for component apiserver
	I1009 19:10:41.336993  315894 cni.go:84] Creating CNI manager for ""
	I1009 19:10:41.337002  315894 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1009 19:10:41.337016  315894 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1009 19:10:41.337038  315894 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-326957 NodeName:functional-326957 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceAutoProvision] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:ma
p[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1009 19:10:41.337209  315894 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "functional-326957"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceAutoProvision"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1009 19:10:41.337290  315894 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1009 19:10:41.345530  315894 binaries.go:44] Found k8s binaries, skipping transfer
	I1009 19:10:41.345603  315894 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1009 19:10:41.353560  315894 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1009 19:10:41.367929  315894 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1009 19:10:41.381333  315894 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2064 bytes)
	I1009 19:10:41.396531  315894 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1009 19:10:41.404361  315894 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 19:10:41.544563  315894 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1009 19:10:41.559289  315894 certs.go:69] Setting up /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/functional-326957 for IP: 192.168.49.2
	I1009 19:10:41.559300  315894 certs.go:195] generating shared ca certs ...
	I1009 19:10:41.559316  315894 certs.go:227] acquiring lock for ca certs: {Name:mk21fccf7de12baa51e226eec44546b42b579a70 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 19:10:41.559485  315894 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21683-294150/.minikube/ca.key
	I1009 19:10:41.559540  315894 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21683-294150/.minikube/proxy-client-ca.key
	I1009 19:10:41.559547  315894 certs.go:257] generating profile certs ...
	I1009 19:10:41.559671  315894 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/functional-326957/client.key
	I1009 19:10:41.559739  315894 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/functional-326957/apiserver.key.9367ecc4
	I1009 19:10:41.559779  315894 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/functional-326957/proxy-client.key
	I1009 19:10:41.559902  315894 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-294150/.minikube/certs/296002.pem (1338 bytes)
	W1009 19:10:41.559957  315894 certs.go:480] ignoring /home/jenkins/minikube-integration/21683-294150/.minikube/certs/296002_empty.pem, impossibly tiny 0 bytes
	I1009 19:10:41.559964  315894 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-294150/.minikube/certs/ca-key.pem (1679 bytes)
	I1009 19:10:41.559989  315894 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-294150/.minikube/certs/ca.pem (1078 bytes)
	I1009 19:10:41.560015  315894 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-294150/.minikube/certs/cert.pem (1123 bytes)
	I1009 19:10:41.560047  315894 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-294150/.minikube/certs/key.pem (1679 bytes)
	I1009 19:10:41.560089  315894 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-294150/.minikube/files/etc/ssl/certs/2960022.pem (1708 bytes)
	I1009 19:10:41.560835  315894 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-294150/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1009 19:10:41.580795  315894 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-294150/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1009 19:10:41.600676  315894 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-294150/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1009 19:10:41.618764  315894 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-294150/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1009 19:10:41.636748  315894 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/functional-326957/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1009 19:10:41.655468  315894 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/functional-326957/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1009 19:10:41.675153  315894 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/functional-326957/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1009 19:10:41.693509  315894 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/functional-326957/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1009 19:10:41.712007  315894 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-294150/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1009 19:10:41.729841  315894 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-294150/.minikube/certs/296002.pem --> /usr/share/ca-certificates/296002.pem (1338 bytes)
	I1009 19:10:41.748525  315894 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-294150/.minikube/files/etc/ssl/certs/2960022.pem --> /usr/share/ca-certificates/2960022.pem (1708 bytes)
	I1009 19:10:41.765916  315894 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1009 19:10:41.779119  315894 ssh_runner.go:195] Run: openssl version
	I1009 19:10:41.785368  315894 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1009 19:10:41.794215  315894 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1009 19:10:41.798197  315894 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  9 19:01 /usr/share/ca-certificates/minikubeCA.pem
	I1009 19:10:41.798271  315894 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1009 19:10:41.839725  315894 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1009 19:10:41.848259  315894 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/296002.pem && ln -fs /usr/share/ca-certificates/296002.pem /etc/ssl/certs/296002.pem"
	I1009 19:10:41.856932  315894 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/296002.pem
	I1009 19:10:41.860943  315894 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  9 19:08 /usr/share/ca-certificates/296002.pem
	I1009 19:10:41.860998  315894 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/296002.pem
	I1009 19:10:41.913907  315894 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/296002.pem /etc/ssl/certs/51391683.0"
	I1009 19:10:41.924145  315894 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2960022.pem && ln -fs /usr/share/ca-certificates/2960022.pem /etc/ssl/certs/2960022.pem"
	I1009 19:10:41.933581  315894 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2960022.pem
	I1009 19:10:41.937679  315894 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  9 19:08 /usr/share/ca-certificates/2960022.pem
	I1009 19:10:41.937737  315894 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2960022.pem
	I1009 19:10:41.979061  315894 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2960022.pem /etc/ssl/certs/3ec20f2e.0"
	I1009 19:10:41.987192  315894 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1009 19:10:41.991289  315894 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1009 19:10:42.034385  315894 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1009 19:10:42.078183  315894 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1009 19:10:42.122514  315894 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1009 19:10:42.166747  315894 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1009 19:10:42.212366  315894 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1009 19:10:42.255320  315894 kubeadm.go:400] StartCluster: {Name:functional-326957 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-326957 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APISer
verNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p
MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1009 19:10:42.255410  315894 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1009 19:10:42.255490  315894 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1009 19:10:42.287611  315894 cri.go:89] found id: "1680d537a2bdf67ab1c9321ae281547a2e31cc37e708821613506227586b1f08"
	I1009 19:10:42.287624  315894 cri.go:89] found id: "b34076edffb2a372648789819ed99d771448ffc3d873aad69b250efd36604e80"
	I1009 19:10:42.287627  315894 cri.go:89] found id: "e3e083ec257ee110b9d09a9cb3508745336c2e629e8957c5fb0d653f53c806d9"
	I1009 19:10:42.287629  315894 cri.go:89] found id: "154d6982b95787f8573a7e3157af7e637fcd23c14f403acce3ce6fb499fe67ae"
	I1009 19:10:42.287632  315894 cri.go:89] found id: "15bb5b7e2aea03ec48909c33d667097b555ea1beda300a73041733a61f436fba"
	I1009 19:10:42.287635  315894 cri.go:89] found id: "25fa6a631cf5ebcc3e75fc1292e16818332b85feefb05dbbb9a8962dc8c78c4c"
	I1009 19:10:42.287638  315894 cri.go:89] found id: "e825a226b7970b13827f1d0f6d8171dd9b17f963474a471860abf157923f60a2"
	I1009 19:10:42.287640  315894 cri.go:89] found id: "41ed208d9dc423fa99e84419eeed75b7de5a35154d9b6e02990dcbb0688d825d"
	I1009 19:10:42.287642  315894 cri.go:89] found id: "38586711305d4d5467a57690b85f439546115ed88b85bcf5cdf1557dd965372c"
	I1009 19:10:42.287652  315894 cri.go:89] found id: "fd6967ac12320a239bd0bc04cbe40e0eafc4a2400e2d7394e4aa0beee94a909d"
	I1009 19:10:42.287654  315894 cri.go:89] found id: "374d81543731ae2abbcfd90e2e10cee92322e81a8d2db927946d09ce3ae5c3a0"
	I1009 19:10:42.287657  315894 cri.go:89] found id: "244419d73573aea3decd532df28789ce87590e18aed250d3c8bd97a51d266eec"
	I1009 19:10:42.287659  315894 cri.go:89] found id: "b658b1db1e6cf3b662016282f52969591a710492a35ce0744da68a350d352b87"
	I1009 19:10:42.287661  315894 cri.go:89] found id: "8e7ed8c9ce28f6dbb43df9728c096d21b26d87cd3715363a5981c05a658cd19d"
	I1009 19:10:42.287663  315894 cri.go:89] found id: "8bc00b8f6c8eca8ea53663c85793397575fd1b88a74a59a82e9da05447f458e6"
	I1009 19:10:42.287668  315894 cri.go:89] found id: "fdbde61a7e0073b06b9b594b54f92880a3b3d5399ab7a319e84ef85ccdffb7c1"
	I1009 19:10:42.287672  315894 cri.go:89] found id: ""
	I1009 19:10:42.287736  315894 ssh_runner.go:195] Run: sudo runc list -f json
	W1009 19:10:42.301455  315894 kubeadm.go:407] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-09T19:10:42Z" level=error msg="open /run/runc: no such file or directory"
	I1009 19:10:42.301526  315894 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1009 19:10:42.310138  315894 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1009 19:10:42.310148  315894 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1009 19:10:42.310201  315894 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1009 19:10:42.318421  315894 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1009 19:10:42.318983  315894 kubeconfig.go:125] found "functional-326957" server: "https://192.168.49.2:8441"
	I1009 19:10:42.320387  315894 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1009 19:10:42.328913  315894 kubeadm.go:644] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml	2025-10-09 19:08:41.122407776 +0000
	+++ /var/tmp/minikube/kubeadm.yaml.new	2025-10-09 19:10:41.391517032 +0000
	@@ -24,7 +24,7 @@
	   certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	   extraArgs:
	     - name: "enable-admission-plugins"
	-      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	+      value: "NamespaceAutoProvision"
	 controllerManager:
	   extraArgs:
	     - name: "allocate-node-cidrs"
	
	-- /stdout --
	I1009 19:10:42.328922  315894 kubeadm.go:1160] stopping kube-system containers ...
	I1009 19:10:42.328933  315894 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1009 19:10:42.328990  315894 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1009 19:10:42.358553  315894 cri.go:89] found id: "1680d537a2bdf67ab1c9321ae281547a2e31cc37e708821613506227586b1f08"
	I1009 19:10:42.358565  315894 cri.go:89] found id: "b34076edffb2a372648789819ed99d771448ffc3d873aad69b250efd36604e80"
	I1009 19:10:42.358568  315894 cri.go:89] found id: "e3e083ec257ee110b9d09a9cb3508745336c2e629e8957c5fb0d653f53c806d9"
	I1009 19:10:42.358571  315894 cri.go:89] found id: "154d6982b95787f8573a7e3157af7e637fcd23c14f403acce3ce6fb499fe67ae"
	I1009 19:10:42.358579  315894 cri.go:89] found id: "15bb5b7e2aea03ec48909c33d667097b555ea1beda300a73041733a61f436fba"
	I1009 19:10:42.358582  315894 cri.go:89] found id: "25fa6a631cf5ebcc3e75fc1292e16818332b85feefb05dbbb9a8962dc8c78c4c"
	I1009 19:10:42.358584  315894 cri.go:89] found id: "e825a226b7970b13827f1d0f6d8171dd9b17f963474a471860abf157923f60a2"
	I1009 19:10:42.358586  315894 cri.go:89] found id: "41ed208d9dc423fa99e84419eeed75b7de5a35154d9b6e02990dcbb0688d825d"
	I1009 19:10:42.358588  315894 cri.go:89] found id: "38586711305d4d5467a57690b85f439546115ed88b85bcf5cdf1557dd965372c"
	I1009 19:10:42.358595  315894 cri.go:89] found id: "fd6967ac12320a239bd0bc04cbe40e0eafc4a2400e2d7394e4aa0beee94a909d"
	I1009 19:10:42.358597  315894 cri.go:89] found id: "374d81543731ae2abbcfd90e2e10cee92322e81a8d2db927946d09ce3ae5c3a0"
	I1009 19:10:42.358599  315894 cri.go:89] found id: "244419d73573aea3decd532df28789ce87590e18aed250d3c8bd97a51d266eec"
	I1009 19:10:42.358601  315894 cri.go:89] found id: "b658b1db1e6cf3b662016282f52969591a710492a35ce0744da68a350d352b87"
	I1009 19:10:42.358603  315894 cri.go:89] found id: "8e7ed8c9ce28f6dbb43df9728c096d21b26d87cd3715363a5981c05a658cd19d"
	I1009 19:10:42.358606  315894 cri.go:89] found id: "8bc00b8f6c8eca8ea53663c85793397575fd1b88a74a59a82e9da05447f458e6"
	I1009 19:10:42.358613  315894 cri.go:89] found id: "fdbde61a7e0073b06b9b594b54f92880a3b3d5399ab7a319e84ef85ccdffb7c1"
	I1009 19:10:42.358614  315894 cri.go:89] found id: ""
	I1009 19:10:42.358619  315894 cri.go:252] Stopping containers: [1680d537a2bdf67ab1c9321ae281547a2e31cc37e708821613506227586b1f08 b34076edffb2a372648789819ed99d771448ffc3d873aad69b250efd36604e80 e3e083ec257ee110b9d09a9cb3508745336c2e629e8957c5fb0d653f53c806d9 154d6982b95787f8573a7e3157af7e637fcd23c14f403acce3ce6fb499fe67ae 15bb5b7e2aea03ec48909c33d667097b555ea1beda300a73041733a61f436fba 25fa6a631cf5ebcc3e75fc1292e16818332b85feefb05dbbb9a8962dc8c78c4c e825a226b7970b13827f1d0f6d8171dd9b17f963474a471860abf157923f60a2 41ed208d9dc423fa99e84419eeed75b7de5a35154d9b6e02990dcbb0688d825d 38586711305d4d5467a57690b85f439546115ed88b85bcf5cdf1557dd965372c fd6967ac12320a239bd0bc04cbe40e0eafc4a2400e2d7394e4aa0beee94a909d 374d81543731ae2abbcfd90e2e10cee92322e81a8d2db927946d09ce3ae5c3a0 244419d73573aea3decd532df28789ce87590e18aed250d3c8bd97a51d266eec b658b1db1e6cf3b662016282f52969591a710492a35ce0744da68a350d352b87 8e7ed8c9ce28f6dbb43df9728c096d21b26d87cd3715363a5981c05a658cd19d 8bc00b8f6c8eca8ea53663c85793397575fd1b88a74a59a82e9da05447f458e6 fdbde61a7e0073b06b9b594b54f92880a3b3d5399ab7a319e84ef85ccdffb7c1]
	I1009 19:10:42.358679  315894 ssh_runner.go:195] Run: which crictl
	I1009 19:10:42.362530  315894 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl stop --timeout=10 1680d537a2bdf67ab1c9321ae281547a2e31cc37e708821613506227586b1f08 b34076edffb2a372648789819ed99d771448ffc3d873aad69b250efd36604e80 e3e083ec257ee110b9d09a9cb3508745336c2e629e8957c5fb0d653f53c806d9 154d6982b95787f8573a7e3157af7e637fcd23c14f403acce3ce6fb499fe67ae 15bb5b7e2aea03ec48909c33d667097b555ea1beda300a73041733a61f436fba 25fa6a631cf5ebcc3e75fc1292e16818332b85feefb05dbbb9a8962dc8c78c4c e825a226b7970b13827f1d0f6d8171dd9b17f963474a471860abf157923f60a2 41ed208d9dc423fa99e84419eeed75b7de5a35154d9b6e02990dcbb0688d825d 38586711305d4d5467a57690b85f439546115ed88b85bcf5cdf1557dd965372c fd6967ac12320a239bd0bc04cbe40e0eafc4a2400e2d7394e4aa0beee94a909d 374d81543731ae2abbcfd90e2e10cee92322e81a8d2db927946d09ce3ae5c3a0 244419d73573aea3decd532df28789ce87590e18aed250d3c8bd97a51d266eec b658b1db1e6cf3b662016282f52969591a710492a35ce0744da68a350d352b87 8e7ed8c9ce28f6dbb43df9728c096d21b26d87cd3715363a5981c05a658cd19d 8bc00b8f6c8eca8ea53663c85793397575fd1b88a74a59a82e9da05447f458e6 fdbde61a7e0073b06b9b594b54f92880a3b3d5399ab7a319e84ef85ccdffb7c1
	I1009 19:10:42.465785  315894 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1009 19:10:42.584129  315894 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1009 19:10:42.592271  315894 kubeadm.go:157] found existing configuration files:
	-rw------- 1 root root 5631 Oct  9 19:08 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5636 Oct  9 19:08 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 1972 Oct  9 19:08 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5584 Oct  9 19:08 /etc/kubernetes/scheduler.conf
	
	I1009 19:10:42.592363  315894 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1009 19:10:42.600589  315894 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1009 19:10:42.608656  315894 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1009 19:10:42.608711  315894 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1009 19:10:42.616306  315894 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1009 19:10:42.623969  315894 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1009 19:10:42.624063  315894 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1009 19:10:42.631885  315894 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1009 19:10:42.639854  315894 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1009 19:10:42.639913  315894 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1009 19:10:42.647758  315894 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1009 19:10:42.655808  315894 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1009 19:10:42.706620  315894 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1009 19:10:46.489291  315894 ssh_runner.go:235] Completed: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (3.782647197s)
	I1009 19:10:46.489349  315894 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1009 19:10:46.714389  315894 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1009 19:10:46.776511  315894 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1009 19:10:46.838771  315894 api_server.go:52] waiting for apiserver process to appear ...
	I1009 19:10:46.838846  315894 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 19:10:47.338928  315894 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 19:10:47.838904  315894 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 19:10:47.854917  315894 api_server.go:72] duration metric: took 1.016155389s to wait for apiserver process to appear ...
	I1009 19:10:47.854932  315894 api_server.go:88] waiting for apiserver healthz status ...
	I1009 19:10:47.854951  315894 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I1009 19:10:51.166605  315894 api_server.go:279] https://192.168.49.2:8441/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1009 19:10:51.166635  315894 api_server.go:103] status: https://192.168.49.2:8441/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1009 19:10:51.166647  315894 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I1009 19:10:51.347369  315894 api_server.go:279] https://192.168.49.2:8441/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1009 19:10:51.347394  315894 api_server.go:103] status: https://192.168.49.2:8441/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1009 19:10:51.355580  315894 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I1009 19:10:51.451885  315894 api_server.go:279] https://192.168.49.2:8441/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1009 19:10:51.451904  315894 api_server.go:103] status: https://192.168.49.2:8441/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1009 19:10:51.855352  315894 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I1009 19:10:51.867136  315894 api_server.go:279] https://192.168.49.2:8441/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1009 19:10:51.867154  315894 api_server.go:103] status: https://192.168.49.2:8441/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1009 19:10:52.355963  315894 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I1009 19:10:52.370684  315894 api_server.go:279] https://192.168.49.2:8441/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1009 19:10:52.370701  315894 api_server.go:103] status: https://192.168.49.2:8441/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1009 19:10:52.855154  315894 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I1009 19:10:52.871069  315894 api_server.go:279] https://192.168.49.2:8441/healthz returned 200:
	ok
	I1009 19:10:52.887725  315894 api_server.go:141] control plane version: v1.34.1
	I1009 19:10:52.887742  315894 api_server.go:131] duration metric: took 5.032803688s to wait for apiserver health ...
	I1009 19:10:52.887750  315894 cni.go:84] Creating CNI manager for ""
	I1009 19:10:52.887755  315894 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1009 19:10:52.892505  315894 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1009 19:10:52.895576  315894 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1009 19:10:52.900190  315894 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1009 19:10:52.900201  315894 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1009 19:10:52.914727  315894 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1009 19:10:53.419795  315894 system_pods.go:43] waiting for kube-system pods to appear ...
	I1009 19:10:53.423268  315894 system_pods.go:59] 8 kube-system pods found
	I1009 19:10:53.423287  315894 system_pods.go:61] "coredns-66bc5c9577-flmkw" [3c890b2e-f292-4f29-9cd1-f0d6e8fe3eb8] Running
	I1009 19:10:53.423296  315894 system_pods.go:61] "etcd-functional-326957" [fe05673f-9964-4bf7-8e9a-594fbcdca003] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1009 19:10:53.423300  315894 system_pods.go:61] "kindnet-xbr2b" [9a808b85-8790-403d-8460-8abc776e1041] Running
	I1009 19:10:53.423308  315894 system_pods.go:61] "kube-apiserver-functional-326957" [d1c2b407-5ede-4c7a-838c-52e933a0f97a] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1009 19:10:53.423314  315894 system_pods.go:61] "kube-controller-manager-functional-326957" [11637aeb-978d-4238-9624-6bc4577c6c95] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1009 19:10:53.423318  315894 system_pods.go:61] "kube-proxy-pxp8x" [15565371-62bd-4a9c-b441-df26b1a67736] Running
	I1009 19:10:53.423324  315894 system_pods.go:61] "kube-scheduler-functional-326957" [52b22082-f21d-4e8f-984a-cea2837ae5ba] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1009 19:10:53.423327  315894 system_pods.go:61] "storage-provisioner" [4695b93b-e3b5-40a3-907a-12bc7bb678f9] Running
	I1009 19:10:53.423332  315894 system_pods.go:74] duration metric: took 3.526289ms to wait for pod list to return data ...
	I1009 19:10:53.423339  315894 node_conditions.go:102] verifying NodePressure condition ...
	I1009 19:10:53.425814  315894 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1009 19:10:53.425832  315894 node_conditions.go:123] node cpu capacity is 2
	I1009 19:10:53.425843  315894 node_conditions.go:105] duration metric: took 2.500619ms to run NodePressure ...
	I1009 19:10:53.425915  315894 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1009 19:10:53.686310  315894 kubeadm.go:728] waiting for restarted kubelet to initialise ...
	I1009 19:10:53.690018  315894 kubeadm.go:743] kubelet initialised
	I1009 19:10:53.690028  315894 kubeadm.go:744] duration metric: took 3.705321ms waiting for restarted kubelet to initialise ...
	I1009 19:10:53.690042  315894 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1009 19:10:53.699797  315894 ops.go:34] apiserver oom_adj: -16
	I1009 19:10:53.699809  315894 kubeadm.go:601] duration metric: took 11.389657198s to restartPrimaryControlPlane
	I1009 19:10:53.699817  315894 kubeadm.go:402] duration metric: took 11.444507846s to StartCluster
	I1009 19:10:53.699832  315894 settings.go:142] acquiring lock: {Name:mk20228ebaa2294ae35726600a0d8058088b24a4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 19:10:53.699892  315894 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21683-294150/kubeconfig
	I1009 19:10:53.700490  315894 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-294150/kubeconfig: {Name:mke4661344aa77e10bf6690825e8d3a29b29b411 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 19:10:53.700691  315894 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1009 19:10:53.700942  315894 config.go:182] Loaded profile config "functional-326957": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1009 19:10:53.700976  315894 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1009 19:10:53.701032  315894 addons.go:69] Setting storage-provisioner=true in profile "functional-326957"
	I1009 19:10:53.701045  315894 addons.go:238] Setting addon storage-provisioner=true in "functional-326957"
	W1009 19:10:53.701050  315894 addons.go:247] addon storage-provisioner should already be in state true
	I1009 19:10:53.701068  315894 host.go:66] Checking if "functional-326957" exists ...
	I1009 19:10:53.701137  315894 addons.go:69] Setting default-storageclass=true in profile "functional-326957"
	I1009 19:10:53.701151  315894 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "functional-326957"
	I1009 19:10:53.701461  315894 cli_runner.go:164] Run: docker container inspect functional-326957 --format={{.State.Status}}
	I1009 19:10:53.701522  315894 cli_runner.go:164] Run: docker container inspect functional-326957 --format={{.State.Status}}
	I1009 19:10:53.704031  315894 out.go:179] * Verifying Kubernetes components...
	I1009 19:10:53.707127  315894 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 19:10:53.731495  315894 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1009 19:10:53.731883  315894 addons.go:238] Setting addon default-storageclass=true in "functional-326957"
	W1009 19:10:53.731893  315894 addons.go:247] addon default-storageclass should already be in state true
	I1009 19:10:53.731916  315894 host.go:66] Checking if "functional-326957" exists ...
	I1009 19:10:53.732312  315894 cli_runner.go:164] Run: docker container inspect functional-326957 --format={{.State.Status}}
	I1009 19:10:53.737832  315894 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1009 19:10:53.737845  315894 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1009 19:10:53.737920  315894 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-326957
	I1009 19:10:53.758846  315894 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1009 19:10:53.758859  315894 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1009 19:10:53.758931  315894 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-326957
	I1009 19:10:53.788691  315894 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33149 SSHKeyPath:/home/jenkins/minikube-integration/21683-294150/.minikube/machines/functional-326957/id_rsa Username:docker}
	I1009 19:10:53.798048  315894 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33149 SSHKeyPath:/home/jenkins/minikube-integration/21683-294150/.minikube/machines/functional-326957/id_rsa Username:docker}
	I1009 19:10:53.937791  315894 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1009 19:10:53.944260  315894 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1009 19:10:53.956784  315894 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1009 19:10:54.157931  315894 node_ready.go:35] waiting up to 6m0s for node "functional-326957" to be "Ready" ...
	I1009 19:10:54.161985  315894 node_ready.go:49] node "functional-326957" is "Ready"
	I1009 19:10:54.162000  315894 node_ready.go:38] duration metric: took 4.051743ms for node "functional-326957" to be "Ready" ...
	I1009 19:10:54.162011  315894 api_server.go:52] waiting for apiserver process to appear ...
	I1009 19:10:54.162072  315894 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 19:10:54.830084  315894 api_server.go:72] duration metric: took 1.129265261s to wait for apiserver process to appear ...
	I1009 19:10:54.830095  315894 api_server.go:88] waiting for apiserver healthz status ...
	I1009 19:10:54.830111  315894 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I1009 19:10:54.833213  315894 out.go:179] * Enabled addons: default-storageclass, storage-provisioner
	I1009 19:10:54.835956  315894 addons.go:514] duration metric: took 1.134970657s for enable addons: enabled=[default-storageclass storage-provisioner]
	I1009 19:10:54.840364  315894 api_server.go:279] https://192.168.49.2:8441/healthz returned 200:
	ok
	I1009 19:10:54.841527  315894 api_server.go:141] control plane version: v1.34.1
	I1009 19:10:54.841541  315894 api_server.go:131] duration metric: took 11.440502ms to wait for apiserver health ...
	I1009 19:10:54.841549  315894 system_pods.go:43] waiting for kube-system pods to appear ...
	I1009 19:10:54.844932  315894 system_pods.go:59] 8 kube-system pods found
	I1009 19:10:54.844946  315894 system_pods.go:61] "coredns-66bc5c9577-flmkw" [3c890b2e-f292-4f29-9cd1-f0d6e8fe3eb8] Running
	I1009 19:10:54.844954  315894 system_pods.go:61] "etcd-functional-326957" [fe05673f-9964-4bf7-8e9a-594fbcdca003] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1009 19:10:54.844958  315894 system_pods.go:61] "kindnet-xbr2b" [9a808b85-8790-403d-8460-8abc776e1041] Running
	I1009 19:10:54.844965  315894 system_pods.go:61] "kube-apiserver-functional-326957" [d1c2b407-5ede-4c7a-838c-52e933a0f97a] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1009 19:10:54.844970  315894 system_pods.go:61] "kube-controller-manager-functional-326957" [11637aeb-978d-4238-9624-6bc4577c6c95] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1009 19:10:54.844974  315894 system_pods.go:61] "kube-proxy-pxp8x" [15565371-62bd-4a9c-b441-df26b1a67736] Running
	I1009 19:10:54.844990  315894 system_pods.go:61] "kube-scheduler-functional-326957" [52b22082-f21d-4e8f-984a-cea2837ae5ba] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1009 19:10:54.844993  315894 system_pods.go:61] "storage-provisioner" [4695b93b-e3b5-40a3-907a-12bc7bb678f9] Running
	I1009 19:10:54.844999  315894 system_pods.go:74] duration metric: took 3.444165ms to wait for pod list to return data ...
	I1009 19:10:54.845005  315894 default_sa.go:34] waiting for default service account to be created ...
	I1009 19:10:54.847774  315894 default_sa.go:45] found service account: "default"
	I1009 19:10:54.847787  315894 default_sa.go:55] duration metric: took 2.77734ms for default service account to be created ...
	I1009 19:10:54.847795  315894 system_pods.go:116] waiting for k8s-apps to be running ...
	I1009 19:10:54.850557  315894 system_pods.go:86] 8 kube-system pods found
	I1009 19:10:54.850572  315894 system_pods.go:89] "coredns-66bc5c9577-flmkw" [3c890b2e-f292-4f29-9cd1-f0d6e8fe3eb8] Running
	I1009 19:10:54.850580  315894 system_pods.go:89] "etcd-functional-326957" [fe05673f-9964-4bf7-8e9a-594fbcdca003] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1009 19:10:54.850586  315894 system_pods.go:89] "kindnet-xbr2b" [9a808b85-8790-403d-8460-8abc776e1041] Running
	I1009 19:10:54.850593  315894 system_pods.go:89] "kube-apiserver-functional-326957" [d1c2b407-5ede-4c7a-838c-52e933a0f97a] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1009 19:10:54.850598  315894 system_pods.go:89] "kube-controller-manager-functional-326957" [11637aeb-978d-4238-9624-6bc4577c6c95] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1009 19:10:54.850602  315894 system_pods.go:89] "kube-proxy-pxp8x" [15565371-62bd-4a9c-b441-df26b1a67736] Running
	I1009 19:10:54.850607  315894 system_pods.go:89] "kube-scheduler-functional-326957" [52b22082-f21d-4e8f-984a-cea2837ae5ba] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1009 19:10:54.850610  315894 system_pods.go:89] "storage-provisioner" [4695b93b-e3b5-40a3-907a-12bc7bb678f9] Running
	I1009 19:10:54.850616  315894 system_pods.go:126] duration metric: took 2.817068ms to wait for k8s-apps to be running ...
	I1009 19:10:54.850622  315894 system_svc.go:44] waiting for kubelet service to be running ....
	I1009 19:10:54.850679  315894 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1009 19:10:54.866796  315894 system_svc.go:56] duration metric: took 16.163699ms WaitForService to wait for kubelet
	I1009 19:10:54.866815  315894 kubeadm.go:586] duration metric: took 1.166104511s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1009 19:10:54.866831  315894 node_conditions.go:102] verifying NodePressure condition ...
	I1009 19:10:54.869818  315894 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1009 19:10:54.869833  315894 node_conditions.go:123] node cpu capacity is 2
	I1009 19:10:54.869842  315894 node_conditions.go:105] duration metric: took 3.007513ms to run NodePressure ...
	I1009 19:10:54.869854  315894 start.go:242] waiting for startup goroutines ...
	I1009 19:10:54.869861  315894 start.go:247] waiting for cluster config update ...
	I1009 19:10:54.869871  315894 start.go:256] writing updated cluster config ...
	I1009 19:10:54.870159  315894 ssh_runner.go:195] Run: rm -f paused
	I1009 19:10:54.874169  315894 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1009 19:10:54.877953  315894 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-flmkw" in "kube-system" namespace to be "Ready" or be gone ...
	I1009 19:10:54.882672  315894 pod_ready.go:94] pod "coredns-66bc5c9577-flmkw" is "Ready"
	I1009 19:10:54.882686  315894 pod_ready.go:86] duration metric: took 4.7203ms for pod "coredns-66bc5c9577-flmkw" in "kube-system" namespace to be "Ready" or be gone ...
	I1009 19:10:54.885063  315894 pod_ready.go:83] waiting for pod "etcd-functional-326957" in "kube-system" namespace to be "Ready" or be gone ...
	W1009 19:10:56.892583  315894 pod_ready.go:104] pod "etcd-functional-326957" is not "Ready", error: <nil>
	W1009 19:10:59.390223  315894 pod_ready.go:104] pod "etcd-functional-326957" is not "Ready", error: <nil>
	W1009 19:11:01.390362  315894 pod_ready.go:104] pod "etcd-functional-326957" is not "Ready", error: <nil>
	W1009 19:11:03.391983  315894 pod_ready.go:104] pod "etcd-functional-326957" is not "Ready", error: <nil>
	W1009 19:11:05.890429  315894 pod_ready.go:104] pod "etcd-functional-326957" is not "Ready", error: <nil>
	I1009 19:11:06.891515  315894 pod_ready.go:94] pod "etcd-functional-326957" is "Ready"
	I1009 19:11:06.891530  315894 pod_ready.go:86] duration metric: took 12.006455626s for pod "etcd-functional-326957" in "kube-system" namespace to be "Ready" or be gone ...
	I1009 19:11:06.894088  315894 pod_ready.go:83] waiting for pod "kube-apiserver-functional-326957" in "kube-system" namespace to be "Ready" or be gone ...
	I1009 19:11:06.899060  315894 pod_ready.go:94] pod "kube-apiserver-functional-326957" is "Ready"
	I1009 19:11:06.899073  315894 pod_ready.go:86] duration metric: took 4.972334ms for pod "kube-apiserver-functional-326957" in "kube-system" namespace to be "Ready" or be gone ...
	I1009 19:11:06.901577  315894 pod_ready.go:83] waiting for pod "kube-controller-manager-functional-326957" in "kube-system" namespace to be "Ready" or be gone ...
	I1009 19:11:06.906386  315894 pod_ready.go:94] pod "kube-controller-manager-functional-326957" is "Ready"
	I1009 19:11:06.906402  315894 pod_ready.go:86] duration metric: took 4.81013ms for pod "kube-controller-manager-functional-326957" in "kube-system" namespace to be "Ready" or be gone ...
	I1009 19:11:06.908837  315894 pod_ready.go:83] waiting for pod "kube-proxy-pxp8x" in "kube-system" namespace to be "Ready" or be gone ...
	I1009 19:11:07.088969  315894 pod_ready.go:94] pod "kube-proxy-pxp8x" is "Ready"
	I1009 19:11:07.088983  315894 pod_ready.go:86] duration metric: took 180.134522ms for pod "kube-proxy-pxp8x" in "kube-system" namespace to be "Ready" or be gone ...
	I1009 19:11:07.289066  315894 pod_ready.go:83] waiting for pod "kube-scheduler-functional-326957" in "kube-system" namespace to be "Ready" or be gone ...
	I1009 19:11:07.689348  315894 pod_ready.go:94] pod "kube-scheduler-functional-326957" is "Ready"
	I1009 19:11:07.689363  315894 pod_ready.go:86] duration metric: took 400.284118ms for pod "kube-scheduler-functional-326957" in "kube-system" namespace to be "Ready" or be gone ...
	I1009 19:11:07.689374  315894 pod_ready.go:40] duration metric: took 12.815172901s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1009 19:11:07.741286  315894 start.go:628] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1009 19:11:07.744561  315894 out.go:179] * Done! kubectl is now configured to use "functional-326957" cluster and "default" namespace by default
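	
	The 403, 500, 200 progression recorded above is the usual shape of an apiserver restart: the anonymous probe is first rejected outright (403), then /healthz answers but lists post-start hooks that have not finished (500), and finally returns ok. The sketch below is an illustrative Go version of that kind of wait loop, not minikube's own code; the URL is taken from the log, and TLS verification is skipped because the probe is anonymous.
	
	package main
	
	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)
	
	// waitForHealthz polls an apiserver /healthz endpoint until it returns 200
	// or the deadline passes. Early responses may be 403 (anonymous user not yet
	// authorized) or 500 (post-start hooks still failing), as seen in the log.
	func waitForHealthz(url string, timeout time.Duration) error {
		client := &http.Client{
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
			Timeout:   5 * time.Second,
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil
				}
				fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("apiserver did not become healthy within %s", timeout)
	}
	
	func main() {
		if err := waitForHealthz("https://192.168.49.2:8441/healthz", 2*time.Minute); err != nil {
			fmt.Println(err)
		}
	}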
	
	
	==> CRI-O <==
	Oct 09 19:11:47 functional-326957 crio[3545]: time="2025-10-09T19:11:47.033893029Z" level=info msg="Stopped pod sandbox (already stopped): b1b84005ecb2bd9b52601ae474f413151f2de78f2d83f9f6ff3c5d457006c64e" id=1b73b4ee-2820-421c-a97d-e072f50a16d9 name=/runtime.v1.RuntimeService/StopPodSandbox
	Oct 09 19:11:47 functional-326957 crio[3545]: time="2025-10-09T19:11:47.036135477Z" level=info msg="Removing pod sandbox: b1b84005ecb2bd9b52601ae474f413151f2de78f2d83f9f6ff3c5d457006c64e" id=43032a1a-d312-4ea9-baa8-9c7dbd3d7b55 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Oct 09 19:11:47 functional-326957 crio[3545]: time="2025-10-09T19:11:47.040145721Z" level=info msg="Removed pod sandbox: b1b84005ecb2bd9b52601ae474f413151f2de78f2d83f9f6ff3c5d457006c64e" id=43032a1a-d312-4ea9-baa8-9c7dbd3d7b55 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Oct 09 19:11:47 functional-326957 crio[3545]: time="2025-10-09T19:11:47.042454932Z" level=info msg="Stopping pod sandbox: 6e4aae2563830dff5427e699f34e922cca465b7fdbc57761e5758f3bf19d1c81" id=2ad034da-1fcb-4cb4-9dc9-d5c70f60efd9 name=/runtime.v1.RuntimeService/StopPodSandbox
	Oct 09 19:11:47 functional-326957 crio[3545]: time="2025-10-09T19:11:47.042665366Z" level=info msg="Stopped pod sandbox (already stopped): 6e4aae2563830dff5427e699f34e922cca465b7fdbc57761e5758f3bf19d1c81" id=2ad034da-1fcb-4cb4-9dc9-d5c70f60efd9 name=/runtime.v1.RuntimeService/StopPodSandbox
	Oct 09 19:11:47 functional-326957 crio[3545]: time="2025-10-09T19:11:47.04568183Z" level=info msg="Removing pod sandbox: 6e4aae2563830dff5427e699f34e922cca465b7fdbc57761e5758f3bf19d1c81" id=d1a4845d-c086-4eb8-bcb4-b97938ee5a39 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Oct 09 19:11:47 functional-326957 crio[3545]: time="2025-10-09T19:11:47.050083978Z" level=info msg="Removed pod sandbox: 6e4aae2563830dff5427e699f34e922cca465b7fdbc57761e5758f3bf19d1c81" id=d1a4845d-c086-4eb8-bcb4-b97938ee5a39 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Oct 09 19:11:47 functional-326957 crio[3545]: time="2025-10-09T19:11:47.863926366Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=78597959-1ae3-4db1-8cc6-a8ac6cb732ab name=/runtime.v1.ImageService/PullImage
	Oct 09 19:11:48 functional-326957 crio[3545]: time="2025-10-09T19:11:48.014615145Z" level=info msg="Running pod sandbox: default/hello-node-75c85bcc94-vwgnr/POD" id=99ec37a1-f867-4e1a-a8c4-0c30f3f63ec9 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 09 19:11:48 functional-326957 crio[3545]: time="2025-10-09T19:11:48.014691911Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 09 19:11:48 functional-326957 crio[3545]: time="2025-10-09T19:11:48.030724799Z" level=info msg="Got pod network &{Name:hello-node-75c85bcc94-vwgnr Namespace:default ID:da9f5491b5e4057c54596e3bd12b1e68bc672566aba7bb8ad36717e2bcc134f6 UID:5e35a642-ff0d-4179-97fb-35e4c1c36818 NetNS:/var/run/netns/ab7dedc7-2290-4de1-b4dc-5d6700e6b274 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x4000659400}] Aliases:map[]}"
	Oct 09 19:11:48 functional-326957 crio[3545]: time="2025-10-09T19:11:48.03076877Z" level=info msg="Adding pod default_hello-node-75c85bcc94-vwgnr to CNI network \"kindnet\" (type=ptp)"
	Oct 09 19:11:48 functional-326957 crio[3545]: time="2025-10-09T19:11:48.040332502Z" level=info msg="Got pod network &{Name:hello-node-75c85bcc94-vwgnr Namespace:default ID:da9f5491b5e4057c54596e3bd12b1e68bc672566aba7bb8ad36717e2bcc134f6 UID:5e35a642-ff0d-4179-97fb-35e4c1c36818 NetNS:/var/run/netns/ab7dedc7-2290-4de1-b4dc-5d6700e6b274 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x4000659400}] Aliases:map[]}"
	Oct 09 19:11:48 functional-326957 crio[3545]: time="2025-10-09T19:11:48.0406816Z" level=info msg="Checking pod default_hello-node-75c85bcc94-vwgnr for CNI network kindnet (type=ptp)"
	Oct 09 19:11:48 functional-326957 crio[3545]: time="2025-10-09T19:11:48.044636657Z" level=info msg="Ran pod sandbox da9f5491b5e4057c54596e3bd12b1e68bc672566aba7bb8ad36717e2bcc134f6 with infra container: default/hello-node-75c85bcc94-vwgnr/POD" id=99ec37a1-f867-4e1a-a8c4-0c30f3f63ec9 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 09 19:11:48 functional-326957 crio[3545]: time="2025-10-09T19:11:48.046482829Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=1b89ae03-4046-4386-9806-5a54cdb177e5 name=/runtime.v1.ImageService/PullImage
	Oct 09 19:12:01 functional-326957 crio[3545]: time="2025-10-09T19:12:01.86375516Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=f2997726-736c-4f86-874a-03fdfd23a638 name=/runtime.v1.ImageService/PullImage
	Oct 09 19:12:12 functional-326957 crio[3545]: time="2025-10-09T19:12:12.864730747Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=211ed07e-7993-47af-b516-eb14f19b8374 name=/runtime.v1.ImageService/PullImage
	Oct 09 19:12:30 functional-326957 crio[3545]: time="2025-10-09T19:12:30.86407313Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=419bdda5-645f-489c-b0fa-a8ed61457083 name=/runtime.v1.ImageService/PullImage
	Oct 09 19:13:02 functional-326957 crio[3545]: time="2025-10-09T19:13:02.864421Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=a01d0f81-2c79-4b9a-969f-a0bef2f7d087 name=/runtime.v1.ImageService/PullImage
	Oct 09 19:13:11 functional-326957 crio[3545]: time="2025-10-09T19:13:11.863608365Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=39b0db28-21fb-407c-b9b2-4f347582065d name=/runtime.v1.ImageService/PullImage
	Oct 09 19:14:32 functional-326957 crio[3545]: time="2025-10-09T19:14:32.864173471Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=4cac82a7-eeec-45b9-9194-8ea788c92957 name=/runtime.v1.ImageService/PullImage
	Oct 09 19:14:44 functional-326957 crio[3545]: time="2025-10-09T19:14:44.864413853Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=05d6ecfc-923f-4619-b5dd-9eaef2225c13 name=/runtime.v1.ImageService/PullImage
	Oct 09 19:17:13 functional-326957 crio[3545]: time="2025-10-09T19:17:13.864370716Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=998b6984-9e20-4cd3-89fb-0ffdbe736c68 name=/runtime.v1.ImageService/PullImage
	Oct 09 19:17:35 functional-326957 crio[3545]: time="2025-10-09T19:17:35.864266806Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=a17aa974-b601-456b-a104-fb6dcdb8fefa name=/runtime.v1.ImageService/PullImage
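	
	The CRI-O entries above show kicbase/echo-server:latest being re-requested every few minutes from 19:11 through 19:17 with no completed pull recorded in this window, which is consistent with the hello-node pods in the failed service tests never becoming ready. A quick way to confirm whether the image ever reached the node's image store is to list CRI images; the sketch below shells out to crictl from Go and is illustrative only (it assumes crictl is reachable via sudo on the node, as it is inside the minikube container).
	
	package main
	
	import (
		"bufio"
		"bytes"
		"fmt"
		"os/exec"
		"strings"
	)
	
	// imagePresent reports whether the CRI image store contains an image whose
	// listing line mentions the given name, by scanning `sudo crictl images` output.
	func imagePresent(name string) (bool, error) {
		out, err := exec.Command("sudo", "crictl", "images").Output()
		if err != nil {
			return false, err
		}
		scanner := bufio.NewScanner(bytes.NewReader(out))
		for scanner.Scan() {
			if strings.Contains(scanner.Text(), name) {
				return true, nil
			}
		}
		return false, scanner.Err()
	}
	
	func main() {
		present, err := imagePresent("echo-server")
		fmt.Println("echo-server present:", present, "err:", err)
	}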
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                             CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	7f76d32e348e3       docker.io/library/nginx@sha256:ac03974aaaeb5e3fbe2ab74d7f2badf1388596f6877cbacf78af3617addbba9a   9 minutes ago       Running             myfrontend                0                   ce74a2bf0f79b       sp-pod                                      default
	a997bb2403377       docker.io/library/nginx@sha256:5d9c9f5c85a351079cc9d2fae74be812ef134f21470926eb2afe8f33ff5859c0   10 minutes ago      Running             nginx                     0                   7978db93236ca       nginx-svc                                   default
	7ea53ac65f031       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                  10 minutes ago      Running             kindnet-cni               2                   1a26b931c11b0       kindnet-xbr2b                               kube-system
	56eacc570b5dc       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                  10 minutes ago      Running             storage-provisioner       2                   3a75bd49bd1ab       storage-provisioner                         kube-system
	03ac9a1121833       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                  10 minutes ago      Running             coredns                   2                   45a992a25c149       coredns-66bc5c9577-flmkw                    kube-system
	5a6fcf33b38c5       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                  10 minutes ago      Running             kube-proxy                2                   a4a2e13391199       kube-proxy-pxp8x                            kube-system
	d335415a4f3c6       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                  10 minutes ago      Running             kube-apiserver            0                   f4eca3dcbd197       kube-apiserver-functional-326957            kube-system
	f37c7b9f5bc93       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                  10 minutes ago      Running             kube-controller-manager   2                   abb5c79cb100a       kube-controller-manager-functional-326957   kube-system
	0841bfd7b17e1       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                  10 minutes ago      Running             etcd                      2                   511fb02015824       etcd-functional-326957                      kube-system
	bbc9b1046e622       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                  10 minutes ago      Running             kube-scheduler            2                   c674bc43fdc14       kube-scheduler-functional-326957            kube-system
	1680d537a2bdf       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                  11 minutes ago      Exited              etcd                      1                   511fb02015824       etcd-functional-326957                      kube-system
	b34076edffb2a       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                  11 minutes ago      Exited              storage-provisioner       1                   3a75bd49bd1ab       storage-provisioner                         kube-system
	e3e083ec257ee       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                  11 minutes ago      Exited              coredns                   1                   45a992a25c149       coredns-66bc5c9577-flmkw                    kube-system
	15bb5b7e2aea0       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                  11 minutes ago      Exited              kube-scheduler            1                   c674bc43fdc14       kube-scheduler-functional-326957            kube-system
	25fa6a631cf5e       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                  11 minutes ago      Exited              kindnet-cni               1                   1a26b931c11b0       kindnet-xbr2b                               kube-system
	e825a226b7970       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                  11 minutes ago      Exited              kube-proxy                1                   a4a2e13391199       kube-proxy-pxp8x                            kube-system
	41ed208d9dc42       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                  11 minutes ago      Exited              kube-controller-manager   1                   abb5c79cb100a       kube-controller-manager-functional-326957   kube-system
	
	
	==> coredns [03ac9a11218336c59ed951a73ecbaaca75b09c0ba436794e01a150b8741530ba] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:45557 - 46627 "HINFO IN 786194165352673869.6389475064401415042. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.013461392s
	
	
	==> coredns [e3e083ec257ee110b9d09a9cb3508745336c2e629e8957c5fb0d653f53c806d9] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:58629 - 53422 "HINFO IN 9185931811110143888.8349659625563677634. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.034623619s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
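	
	The connection-refused errors from the earlier CoreDNS container are what one would expect while the apiserver behind the kubernetes service VIP (10.96.0.1:443) was being restarted; the attempt-2 coredns container listed above starts cleanly once the control plane is back. For completeness, a minimal probe of that VIP from inside a pod on the node, distinguishing "connection refused" from a timeout, could look like the sketch below (the address comes from the log, everything else is an assumption).
	
	package main
	
	import (
		"fmt"
		"net"
		"time"
	)
	
	// Dial the kubernetes service VIP. "connection refused" means the VIP is
	// routed but no backend answered; a timeout would instead point at missing
	// kube-proxy or CNI rules.
	func main() {
		conn, err := net.DialTimeout("tcp", "10.96.0.1:443", 3*time.Second)
		if err != nil {
			fmt.Println("dial failed:", err)
			return
		}
		conn.Close()
		fmt.Println("TCP connect to 10.96.0.1:443 succeeded")
	}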
	
	
	==> describe nodes <==
	Name:               functional-326957
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=functional-326957
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=67931d2cc0a2e29153be17ee4f2d502b8a45c9cb
	                    minikube.k8s.io/name=functional-326957
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_09T19_09_01_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 09 Oct 2025 19:08:57 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-326957
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 09 Oct 2025 19:21:33 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 09 Oct 2025 19:21:12 +0000   Thu, 09 Oct 2025 19:08:53 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 09 Oct 2025 19:21:12 +0000   Thu, 09 Oct 2025 19:08:53 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 09 Oct 2025 19:21:12 +0000   Thu, 09 Oct 2025 19:08:53 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 09 Oct 2025 19:21:12 +0000   Thu, 09 Oct 2025 19:09:46 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    functional-326957
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022292Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022292Ki
	  pods:               110
	System Info:
	  Machine ID:                 3252c8dc1d664d41b6b1620c8848bcc5
	  System UUID:                eef7f0ca-65b9-4013-9967-98e2d67275d2
	  Boot ID:                    7eb9c7d9-8be8-415a-b196-052b632584a4
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (12 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     hello-node-75c85bcc94-vwgnr                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m48s
	  default                     hello-node-connect-7d85dfc575-zwl9v          0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  default                     nginx-svc                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  default                     sp-pod                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m55s
	  kube-system                 coredns-66bc5c9577-flmkw                     100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     12m
	  kube-system                 etcd-functional-326957                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         12m
	  kube-system                 kindnet-xbr2b                                100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      12m
	  kube-system                 kube-apiserver-functional-326957             250m (12%)    0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-controller-manager-functional-326957    200m (10%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-proxy-pxp8x                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-scheduler-functional-326957             100m (5%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 12m                kube-proxy       
	  Normal   Starting                 10m                kube-proxy       
	  Normal   Starting                 11m                kube-proxy       
	  Normal   Starting                 12m                kubelet          Starting kubelet.
	  Warning  CgroupV1                 12m                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  12m (x8 over 12m)  kubelet          Node functional-326957 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    12m (x8 over 12m)  kubelet          Node functional-326957 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     12m (x8 over 12m)  kubelet          Node functional-326957 status is now: NodeHasSufficientPID
	  Normal   Starting                 12m                kubelet          Starting kubelet.
	  Warning  CgroupV1                 12m                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  12m                kubelet          Node functional-326957 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    12m                kubelet          Node functional-326957 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     12m                kubelet          Node functional-326957 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           12m                node-controller  Node functional-326957 event: Registered Node functional-326957 in Controller
	  Normal   NodeReady                11m                kubelet          Node functional-326957 status is now: NodeReady
	  Normal   RegisteredNode           11m                node-controller  Node functional-326957 event: Registered Node functional-326957 in Controller
	  Normal   Starting                 10m                kubelet          Starting kubelet.
	  Warning  CgroupV1                 10m                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  10m (x8 over 10m)  kubelet          Node functional-326957 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    10m (x8 over 10m)  kubelet          Node functional-326957 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     10m (x8 over 10m)  kubelet          Node functional-326957 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           10m                node-controller  Node functional-326957 event: Registered Node functional-326957 in Controller
	
	
	==> dmesg <==
	[Oct 9 17:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.015195] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.531968] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.036847] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.757016] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +6.932356] kauditd_printk_skb: 36 callbacks suppressed
	[Oct 9 18:02] hrtimer: interrupt took 20603549 ns
	[Oct 9 18:59] kauditd_printk_skb: 8 callbacks suppressed
	[Oct 9 19:02] overlayfs: idmapped layers are currently not supported
	[  +0.066862] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[Oct 9 19:07] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:08] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:14] kauditd_printk_skb: 8 callbacks suppressed
	
	
	==> etcd [0841bfd7b17e1c91b062d1527591f07b5ccb6e21380a3d789b85e8e62202ef44] <==
	{"level":"warn","ts":"2025-10-09T19:10:49.745202Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34004","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T19:10:49.746485Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34016","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T19:10:49.762207Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34022","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T19:10:49.779763Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34040","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T19:10:49.797417Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34064","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T19:10:49.816277Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34074","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T19:10:49.849432Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34092","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T19:10:49.878107Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34112","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T19:10:49.897380Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34130","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T19:10:49.935241Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34142","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T19:10:49.956481Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34158","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T19:10:49.974623Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34166","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T19:10:50.006404Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34180","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T19:10:50.042559Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34194","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T19:10:50.055171Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34212","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T19:10:50.072330Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34242","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T19:10:50.093049Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34264","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T19:10:50.111550Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34278","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T19:10:50.146218Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34302","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T19:10:50.165281Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34316","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T19:10:50.190069Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34338","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T19:10:50.310778Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34356","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-10-09T19:20:48.872079Z","caller":"mvcc/index.go:194","msg":"compact tree index","revision":1127}
	{"level":"info","ts":"2025-10-09T19:20:48.896653Z","caller":"mvcc/kvstore_compaction.go:70","msg":"finished scheduled compaction","compact-revision":1127,"took":"24.273104ms","hash":2396393193,"current-db-size-bytes":3289088,"current-db-size":"3.3 MB","current-db-size-in-use-bytes":1454080,"current-db-size-in-use":"1.5 MB"}
	{"level":"info","ts":"2025-10-09T19:20:48.896711Z","caller":"mvcc/hash.go:157","msg":"storing new hash","hash":2396393193,"revision":1127,"compact-revision":-1}
	
	
	==> etcd [1680d537a2bdf67ab1c9321ae281547a2e31cc37e708821613506227586b1f08] <==
	{"level":"warn","ts":"2025-10-09T19:10:02.400950Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56622","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T19:10:02.415891Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56642","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T19:10:02.459106Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56660","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T19:10:02.498480Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56678","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T19:10:02.508922Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56692","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T19:10:02.553696Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56720","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T19:10:02.636076Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56730","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-10-09T19:10:28.757875Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-10-09T19:10:28.757919Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"functional-326957","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	{"level":"error","ts":"2025-10-09T19:10:28.758003Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-10-09T19:10:28.899602Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-10-09T19:10:28.901184Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-09T19:10:28.901234Z","caller":"etcdserver/server.go:1281","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"aec36adc501070cc","current-leader-member-id":"aec36adc501070cc"}
	{"level":"warn","ts":"2025-10-09T19:10:28.901226Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-10-09T19:10:28.901266Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"error","ts":"2025-10-09T19:10:28.901275Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.49.2:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"warn","ts":"2025-10-09T19:10:28.901340Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-10-09T19:10:28.901356Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"info","ts":"2025-10-09T19:10:28.901353Z","caller":"etcdserver/server.go:2319","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"error","ts":"2025-10-09T19:10:28.901363Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-09T19:10:28.901370Z","caller":"etcdserver/server.go:2342","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"info","ts":"2025-10-09T19:10:28.905670Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"error","ts":"2025-10-09T19:10:28.905764Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.49.2:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-09T19:10:28.905795Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2025-10-09T19:10:28.905805Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"functional-326957","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	
	
	==> kernel <==
	 19:21:35 up  2:03,  0 user,  load average: 0.16, 0.35, 1.34
	Linux functional-326957 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [25fa6a631cf5ebcc3e75fc1292e16818332b85feefb05dbbb9a8962dc8c78c4c] <==
	I1009 19:09:59.249088       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1009 19:09:59.265925       1 main.go:139] hostIP = 192.168.49.2
	podIP = 192.168.49.2
	I1009 19:09:59.266075       1 main.go:148] setting mtu 1500 for CNI 
	I1009 19:09:59.266088       1 main.go:178] kindnetd IP family: "ipv4"
	I1009 19:09:59.266100       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-09T19:09:59Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1009 19:09:59.520668       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1009 19:09:59.520786       1 controller.go:381] "Waiting for informer caches to sync"
	I1009 19:09:59.520852       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1009 19:09:59.523493       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1009 19:10:04.221749       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1009 19:10:04.221815       1 metrics.go:72] Registering metrics
	I1009 19:10:04.221907       1 controller.go:711] "Syncing nftables rules"
	I1009 19:10:09.509492       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1009 19:10:09.509546       1 main.go:301] handling current node
	I1009 19:10:19.509027       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1009 19:10:19.509100       1 main.go:301] handling current node
	
	
	==> kindnet [7ea53ac65f031a9db7245b3ae455186c6aa63c66c833b50c90f08e62a6b83461] <==
	I1009 19:19:32.512118       1 main.go:301] handling current node
	I1009 19:19:42.512663       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1009 19:19:42.512776       1 main.go:301] handling current node
	I1009 19:19:52.510960       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1009 19:19:52.511098       1 main.go:301] handling current node
	I1009 19:20:02.511769       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1009 19:20:02.511876       1 main.go:301] handling current node
	I1009 19:20:12.511979       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1009 19:20:12.512013       1 main.go:301] handling current node
	I1009 19:20:22.519569       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1009 19:20:22.519603       1 main.go:301] handling current node
	I1009 19:20:32.511972       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1009 19:20:32.512007       1 main.go:301] handling current node
	I1009 19:20:42.513701       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1009 19:20:42.513736       1 main.go:301] handling current node
	I1009 19:20:52.517721       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1009 19:20:52.517840       1 main.go:301] handling current node
	I1009 19:21:02.511610       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1009 19:21:02.511645       1 main.go:301] handling current node
	I1009 19:21:12.510996       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1009 19:21:12.511115       1 main.go:301] handling current node
	I1009 19:21:22.520117       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1009 19:21:22.520153       1 main.go:301] handling current node
	I1009 19:21:32.512355       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1009 19:21:32.512391       1 main.go:301] handling current node
	
	
	==> kube-apiserver [d335415a4f3c6b967c6da8ff4548ae15cb4963d7bd050ed23a8dccc8e3333b72] <==
	I1009 19:10:51.373417       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1009 19:10:51.373463       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1009 19:10:51.374144       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1009 19:10:51.374263       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1009 19:10:51.374467       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1009 19:10:51.387818       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1009 19:10:51.388030       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1009 19:10:51.405168       1 cache.go:39] Caches are synced for autoregister controller
	E1009 19:10:51.481788       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1009 19:10:51.905735       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1009 19:10:52.066939       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1009 19:10:53.411265       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1009 19:10:53.542446       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1009 19:10:53.614195       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1009 19:10:53.622074       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1009 19:11:09.726761       1 controller.go:667] quota admission added evaluator for: endpoints
	I1009 19:11:11.137192       1 alloc.go:328] "allocated clusterIPs" service="default/invalid-svc" clusterIPs={"IPv4":"10.98.239.55"}
	I1009 19:11:11.170938       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1009 19:11:22.874906       1 alloc.go:328] "allocated clusterIPs" service="default/nginx-svc" clusterIPs={"IPv4":"10.110.36.24"}
	I1009 19:11:33.327049       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1009 19:11:33.455880       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node-connect" clusterIPs={"IPv4":"10.96.169.39"}
	E1009 19:11:39.636283       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8441->192.168.49.1:51676: use of closed network connection
	E1009 19:11:47.566153       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8441->192.168.49.1:45120: use of closed network connection
	I1009 19:11:47.784021       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node" clusterIPs={"IPv4":"10.110.60.165"}
	I1009 19:20:51.275063       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	
	
	==> kube-controller-manager [41ed208d9dc423fa99e84419eeed75b7de5a35154d9b6e02990dcbb0688d825d] <==
	I1009 19:10:07.469482       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1009 19:10:07.469493       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1009 19:10:07.469504       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1009 19:10:07.475788       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1009 19:10:07.478241       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1009 19:10:07.480574       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1009 19:10:07.483887       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1009 19:10:07.483970       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1009 19:10:07.484002       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1009 19:10:07.484013       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1009 19:10:07.484020       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1009 19:10:07.487019       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1009 19:10:07.488199       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1009 19:10:07.489469       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1009 19:10:07.490685       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1009 19:10:07.492928       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1009 19:10:07.506296       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1009 19:10:07.517878       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1009 19:10:07.518162       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1009 19:10:07.518229       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1009 19:10:07.519272       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1009 19:10:07.519286       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1009 19:10:07.519298       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1009 19:10:07.523475       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1009 19:10:07.526872       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	
	
	==> kube-controller-manager [f37c7b9f5bc93b4aaeb2312e7b0dce2c2e18da69c7dc98ac0a468d4d71b374fe] <==
	I1009 19:10:54.608005       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1009 19:10:54.616141       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1009 19:10:54.616382       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1009 19:10:54.616499       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1009 19:10:54.616604       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1009 19:10:54.616634       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1009 19:10:54.616515       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1009 19:10:54.622137       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1009 19:10:54.622297       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1009 19:10:54.623505       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1009 19:10:54.625524       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1009 19:10:54.625617       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1009 19:10:54.626027       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1009 19:10:54.629184       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1009 19:10:54.629388       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1009 19:10:54.626305       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1009 19:10:54.628021       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1009 19:10:54.628156       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1009 19:10:54.634536       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1009 19:10:54.638486       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1009 19:10:54.635594       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1009 19:10:54.628345       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1009 19:10:54.651818       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1009 19:10:54.654921       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1009 19:10:54.678082       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	
	
	==> kube-proxy [5a6fcf33b38c5f103fc76b31c6cedf5cca066fb195829d19c59a7192a372d504] <==
	I1009 19:10:52.362263       1 server_linux.go:53] "Using iptables proxy"
	I1009 19:10:52.474771       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1009 19:10:52.575577       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1009 19:10:52.575619       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1009 19:10:52.575713       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1009 19:10:52.597545       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1009 19:10:52.597599       1 server_linux.go:132] "Using iptables Proxier"
	I1009 19:10:52.608084       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1009 19:10:52.619313       1 server.go:527] "Version info" version="v1.34.1"
	I1009 19:10:52.619545       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1009 19:10:52.620788       1 config.go:200] "Starting service config controller"
	I1009 19:10:52.620860       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1009 19:10:52.620908       1 config.go:106] "Starting endpoint slice config controller"
	I1009 19:10:52.620935       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1009 19:10:52.620969       1 config.go:403] "Starting serviceCIDR config controller"
	I1009 19:10:52.620996       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1009 19:10:52.624805       1 config.go:309] "Starting node config controller"
	I1009 19:10:52.624865       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1009 19:10:52.624906       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1009 19:10:52.721785       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1009 19:10:52.721886       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1009 19:10:52.721963       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-proxy [e825a226b7970b13827f1d0f6d8171dd9b17f963474a471860abf157923f60a2] <==
	I1009 19:10:01.937057       1 server_linux.go:53] "Using iptables proxy"
	I1009 19:10:03.396629       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1009 19:10:04.219348       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1009 19:10:04.219372       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1009 19:10:04.219431       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1009 19:10:04.882679       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1009 19:10:04.882751       1 server_linux.go:132] "Using iptables Proxier"
	I1009 19:10:04.957644       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1009 19:10:04.958045       1 server.go:527] "Version info" version="v1.34.1"
	I1009 19:10:04.958267       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1009 19:10:04.959625       1 config.go:200] "Starting service config controller"
	I1009 19:10:04.959693       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1009 19:10:04.959743       1 config.go:106] "Starting endpoint slice config controller"
	I1009 19:10:04.959772       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1009 19:10:04.959810       1 config.go:403] "Starting serviceCIDR config controller"
	I1009 19:10:04.959841       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1009 19:10:05.089865       1 config.go:309] "Starting node config controller"
	I1009 19:10:05.089901       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1009 19:10:05.089910       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1009 19:10:05.163914       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1009 19:10:05.163967       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1009 19:10:05.163993       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [15bb5b7e2aea03ec48909c33d667097b555ea1beda300a73041733a61f436fba] <==
	I1009 19:10:02.423994       1 serving.go:386] Generated self-signed cert in-memory
	I1009 19:10:05.239581       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1009 19:10:05.239696       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1009 19:10:05.248361       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1009 19:10:05.248531       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1009 19:10:05.248576       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1009 19:10:05.249264       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1009 19:10:05.251591       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1009 19:10:05.251626       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1009 19:10:05.251645       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1009 19:10:05.251655       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1009 19:10:05.350204       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1009 19:10:05.352760       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1009 19:10:05.352834       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1009 19:10:28.763466       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	I1009 19:10:28.763485       1 server.go:263] "[graceful-termination] secure server has stopped listening"
	I1009 19:10:28.763507       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I1009 19:10:28.763532       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1009 19:10:28.763553       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1009 19:10:28.763567       1 requestheader_controller.go:194] Shutting down RequestHeaderAuthRequestController
	I1009 19:10:28.763843       1 server.go:265] "[graceful-termination] secure server is exiting"
	E1009 19:10:28.763869       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [bbc9b1046e622d38265fce1fbdafa595ae78eb7c706b721a48e4da0da31b6e22] <==
	I1009 19:10:50.738920       1 serving.go:386] Generated self-signed cert in-memory
	I1009 19:10:51.751023       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1009 19:10:51.751068       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1009 19:10:51.757379       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1009 19:10:51.757437       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1009 19:10:51.757500       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1009 19:10:51.757516       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1009 19:10:51.757530       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1009 19:10:51.757546       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1009 19:10:51.758055       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1009 19:10:51.758227       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1009 19:10:51.857904       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1009 19:10:51.858174       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1009 19:10:51.858295       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	
	
	==> kubelet <==
	Oct 09 19:19:02 functional-326957 kubelet[3855]: E1009 19:19:02.864217    3855 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-zwl9v" podUID="23d5a564-ebfc-4fd5-959e-08300e0e452f"
	Oct 09 19:19:14 functional-326957 kubelet[3855]: E1009 19:19:14.865162    3855 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-zwl9v" podUID="23d5a564-ebfc-4fd5-959e-08300e0e452f"
	Oct 09 19:19:14 functional-326957 kubelet[3855]: E1009 19:19:14.865773    3855 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-vwgnr" podUID="5e35a642-ff0d-4179-97fb-35e4c1c36818"
	Oct 09 19:19:26 functional-326957 kubelet[3855]: E1009 19:19:26.864110    3855 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-zwl9v" podUID="23d5a564-ebfc-4fd5-959e-08300e0e452f"
	Oct 09 19:19:27 functional-326957 kubelet[3855]: E1009 19:19:27.863480    3855 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-vwgnr" podUID="5e35a642-ff0d-4179-97fb-35e4c1c36818"
	Oct 09 19:19:37 functional-326957 kubelet[3855]: E1009 19:19:37.863486    3855 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-zwl9v" podUID="23d5a564-ebfc-4fd5-959e-08300e0e452f"
	Oct 09 19:19:41 functional-326957 kubelet[3855]: E1009 19:19:41.863176    3855 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-vwgnr" podUID="5e35a642-ff0d-4179-97fb-35e4c1c36818"
	Oct 09 19:19:48 functional-326957 kubelet[3855]: E1009 19:19:48.864288    3855 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-zwl9v" podUID="23d5a564-ebfc-4fd5-959e-08300e0e452f"
	Oct 09 19:19:54 functional-326957 kubelet[3855]: E1009 19:19:54.865771    3855 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-vwgnr" podUID="5e35a642-ff0d-4179-97fb-35e4c1c36818"
	Oct 09 19:19:59 functional-326957 kubelet[3855]: E1009 19:19:59.863413    3855 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-zwl9v" podUID="23d5a564-ebfc-4fd5-959e-08300e0e452f"
	Oct 09 19:20:05 functional-326957 kubelet[3855]: E1009 19:20:05.864207    3855 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-vwgnr" podUID="5e35a642-ff0d-4179-97fb-35e4c1c36818"
	Oct 09 19:20:10 functional-326957 kubelet[3855]: E1009 19:20:10.864072    3855 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-zwl9v" podUID="23d5a564-ebfc-4fd5-959e-08300e0e452f"
	Oct 09 19:20:19 functional-326957 kubelet[3855]: E1009 19:20:19.864030    3855 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-vwgnr" podUID="5e35a642-ff0d-4179-97fb-35e4c1c36818"
	Oct 09 19:20:25 functional-326957 kubelet[3855]: E1009 19:20:25.863854    3855 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-zwl9v" podUID="23d5a564-ebfc-4fd5-959e-08300e0e452f"
	Oct 09 19:20:32 functional-326957 kubelet[3855]: E1009 19:20:32.863894    3855 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-vwgnr" podUID="5e35a642-ff0d-4179-97fb-35e4c1c36818"
	Oct 09 19:20:40 functional-326957 kubelet[3855]: E1009 19:20:40.863902    3855 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-zwl9v" podUID="23d5a564-ebfc-4fd5-959e-08300e0e452f"
	Oct 09 19:20:46 functional-326957 kubelet[3855]: E1009 19:20:46.864103    3855 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-vwgnr" podUID="5e35a642-ff0d-4179-97fb-35e4c1c36818"
	Oct 09 19:20:53 functional-326957 kubelet[3855]: E1009 19:20:53.863326    3855 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-zwl9v" podUID="23d5a564-ebfc-4fd5-959e-08300e0e452f"
	Oct 09 19:21:00 functional-326957 kubelet[3855]: E1009 19:21:00.863900    3855 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-vwgnr" podUID="5e35a642-ff0d-4179-97fb-35e4c1c36818"
	Oct 09 19:21:06 functional-326957 kubelet[3855]: E1009 19:21:06.864727    3855 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-zwl9v" podUID="23d5a564-ebfc-4fd5-959e-08300e0e452f"
	Oct 09 19:21:12 functional-326957 kubelet[3855]: E1009 19:21:12.864248    3855 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-vwgnr" podUID="5e35a642-ff0d-4179-97fb-35e4c1c36818"
	Oct 09 19:21:18 functional-326957 kubelet[3855]: E1009 19:21:18.864049    3855 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-zwl9v" podUID="23d5a564-ebfc-4fd5-959e-08300e0e452f"
	Oct 09 19:21:23 functional-326957 kubelet[3855]: E1009 19:21:23.864191    3855 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-vwgnr" podUID="5e35a642-ff0d-4179-97fb-35e4c1c36818"
	Oct 09 19:21:32 functional-326957 kubelet[3855]: E1009 19:21:32.864807    3855 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-zwl9v" podUID="23d5a564-ebfc-4fd5-959e-08300e0e452f"
	Oct 09 19:21:35 functional-326957 kubelet[3855]: E1009 19:21:35.863393    3855 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-vwgnr" podUID="5e35a642-ff0d-4179-97fb-35e4c1c36818"
	
	
	==> storage-provisioner [56eacc570b5dcb0181305d0a368ff090e77fa288a1bd32a32fa7691dc32d056c] <==
	W1009 19:21:10.663710       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1009 19:21:12.666950       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1009 19:21:12.671842       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1009 19:21:14.675876       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1009 19:21:14.680726       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1009 19:21:16.684122       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1009 19:21:16.689527       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1009 19:21:18.693135       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1009 19:21:18.700042       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1009 19:21:20.703604       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1009 19:21:20.708346       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1009 19:21:22.712199       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1009 19:21:22.716884       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1009 19:21:24.720738       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1009 19:21:24.725290       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1009 19:21:26.728727       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1009 19:21:26.735721       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1009 19:21:28.738410       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1009 19:21:28.742889       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1009 19:21:30.747080       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1009 19:21:30.752105       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1009 19:21:32.755657       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1009 19:21:32.762816       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1009 19:21:34.765589       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1009 19:21:34.770338       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [b34076edffb2a372648789819ed99d771448ffc3d873aad69b250efd36604e80] <==
	I1009 19:10:00.966976       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1009 19:10:04.295913       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1009 19:10:04.295967       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1009 19:10:04.319014       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1009 19:10:07.778603       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1009 19:10:12.039474       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1009 19:10:15.638260       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1009 19:10:18.691805       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1009 19:10:21.714252       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1009 19:10:21.719289       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1009 19:10:21.719457       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1009 19:10:21.719631       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-326957_9a5fa30f-1efa-498b-acbf-80c4e2785a6e!
	I1009 19:10:21.721565       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"df569fbe-cf80-44ea-81bc-01ec0282d9f6", APIVersion:"v1", ResourceVersion:"570", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-326957_9a5fa30f-1efa-498b-acbf-80c4e2785a6e became leader
	W1009 19:10:21.724074       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1009 19:10:21.729880       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1009 19:10:21.820393       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-326957_9a5fa30f-1efa-498b-acbf-80c4e2785a6e!
	W1009 19:10:23.732649       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1009 19:10:23.740354       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1009 19:10:25.789051       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1009 19:10:25.828018       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1009 19:10:27.831548       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1009 19:10:27.837251       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-326957 -n functional-326957
helpers_test.go:269: (dbg) Run:  kubectl --context functional-326957 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: hello-node-75c85bcc94-vwgnr hello-node-connect-7d85dfc575-zwl9v
helpers_test.go:282: ======> post-mortem[TestFunctional/parallel/ServiceCmdConnect]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context functional-326957 describe pod hello-node-75c85bcc94-vwgnr hello-node-connect-7d85dfc575-zwl9v
helpers_test.go:290: (dbg) kubectl --context functional-326957 describe pod hello-node-75c85bcc94-vwgnr hello-node-connect-7d85dfc575-zwl9v:

                                                
                                                
-- stdout --
	Name:             hello-node-75c85bcc94-vwgnr
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-326957/192.168.49.2
	Start Time:       Thu, 09 Oct 2025 19:11:47 +0000
	Labels:           app=hello-node
	                  pod-template-hash=75c85bcc94
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.8
	IPs:
	  IP:           10.244.0.8
	Controlled By:  ReplicaSet/hello-node-75c85bcc94
	Containers:
	  echo-server:
	    Container ID:   
	    Image:          kicbase/echo-server
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-jfkl7 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-jfkl7:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                     From               Message
	  ----     ------     ----                    ----               -------
	  Normal   Scheduled  9m49s                   default-scheduler  Successfully assigned default/hello-node-75c85bcc94-vwgnr to functional-326957
	  Normal   Pulling    6m52s (x5 over 9m48s)   kubelet            Pulling image "kicbase/echo-server"
	  Warning  Failed     6m52s (x5 over 9m48s)   kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
	  Warning  Failed     6m52s (x5 over 9m48s)   kubelet            Error: ErrImagePull
	  Warning  Failed     4m41s (x20 over 9m48s)  kubelet            Error: ImagePullBackOff
	  Normal   BackOff    4m28s (x21 over 9m48s)  kubelet            Back-off pulling image "kicbase/echo-server"
	
	
	Name:             hello-node-connect-7d85dfc575-zwl9v
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-326957/192.168.49.2
	Start Time:       Thu, 09 Oct 2025 19:11:33 +0000
	Labels:           app=hello-node-connect
	                  pod-template-hash=7d85dfc575
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.6
	IPs:
	  IP:           10.244.0.6
	Controlled By:  ReplicaSet/hello-node-connect-7d85dfc575
	Containers:
	  echo-server:
	    Container ID:   
	    Image:          kicbase/echo-server
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-tglk4 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-tglk4:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                  From               Message
	  ----     ------     ----                 ----               -------
	  Normal   Scheduled  10m                  default-scheduler  Successfully assigned default/hello-node-connect-7d85dfc575-zwl9v to functional-326957
	  Normal   Pulling    7m4s (x5 over 10m)   kubelet            Pulling image "kicbase/echo-server"
	  Warning  Failed     7m4s (x5 over 10m)   kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
	  Warning  Failed     7m4s (x5 over 10m)   kubelet            Error: ErrImagePull
	  Normal   BackOff    5m1s (x21 over 10m)  kubelet            Back-off pulling image "kicbase/echo-server"
	  Warning  Failed     5m1s (x21 over 10m)  kubelet            Error: ImagePullBackOff

                                                
                                                
-- /stdout --
helpers_test.go:293: <<< TestFunctional/parallel/ServiceCmdConnect FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestFunctional/parallel/ServiceCmdConnect (603.57s)
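Both hello-node pods above sit in ImagePullBackOff because CRI-O resolves short image names in enforcing mode, and the unqualified reference kicbase/echo-server matches more than one candidate registry ("returns ambiguous list"). A minimal workaround sketch, assuming the image is meant to come from Docker Hub (the registry and the hello-node-fq name are illustrative, not taken from the test):

	# create the deployment with a fully qualified image so short-name resolution never runs
	kubectl --context functional-326957 create deployment hello-node-fq \
	  --image=docker.io/kicbase/echo-server:latest
	kubectl --context functional-326957 get pods -l app=hello-node-fq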

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListShort (2.3s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-326957 image ls --format short --alsologtostderr
functional_test.go:276: (dbg) Done: out/minikube-linux-arm64 -p functional-326957 image ls --format short --alsologtostderr: (2.297851209s)
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-326957 image ls --format short --alsologtostderr:

                                                
                                                
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-326957 image ls --format short --alsologtostderr:
I1009 19:21:57.138114  324872 out.go:360] Setting OutFile to fd 1 ...
I1009 19:21:57.138220  324872 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1009 19:21:57.138226  324872 out.go:374] Setting ErrFile to fd 2...
I1009 19:21:57.138231  324872 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1009 19:21:57.138586  324872 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21683-294150/.minikube/bin
I1009 19:21:57.139550  324872 config.go:182] Loaded profile config "functional-326957": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1009 19:21:57.139673  324872 config.go:182] Loaded profile config "functional-326957": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1009 19:21:57.140365  324872 cli_runner.go:164] Run: docker container inspect functional-326957 --format={{.State.Status}}
I1009 19:21:57.179338  324872 ssh_runner.go:195] Run: systemctl --version
I1009 19:21:57.179433  324872 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-326957
I1009 19:21:57.206716  324872 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33149 SSHKeyPath:/home/jenkins/minikube-integration/21683-294150/.minikube/machines/functional-326957/id_rsa Username:docker}
I1009 19:21:57.312775  324872 ssh_runner.go:195] Run: sudo crictl images --output json
I1009 19:21:59.344728  324872 ssh_runner.go:235] Completed: sudo crictl images --output json: (2.031922744s)
W1009 19:21:59.344794  324872 cache_images.go:735] Failed to list images for profile functional-326957 crictl images: sudo crictl images --output json: Process exited with status 1
stdout:

                                                
                                                
stderr:
E1009 19:21:59.341958    7126 log.go:32] "ListImages with filter from image service failed" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded" filter="image:{}"
time="2025-10-09T19:21:59Z" level=fatal msg="listing images: rpc error: code = DeadlineExceeded desc = context deadline exceeded"
functional_test.go:290: expected registry.k8s.io/pause to be listed with minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageListShort (2.30s)
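Here the image listing itself failed on the node: crictl images hit its RPC deadline (DeadlineExceeded) after roughly two seconds, so minikube saw no images rather than a missing one. A sketch for checking whether CRI-O is merely slow or actually wedged, run over minikube ssh (assumes the profile is still up; the 50-line journal tail is an arbitrary choice):

	# re-run the exact listing the test used, directly on the node
	out/minikube-linux-arm64 -p functional-326957 ssh -- "sudo crictl images --output json"
	# inspect the tail of the CRI-O journal for slow or failing storage calls
	out/minikube-linux-arm64 -p functional-326957 ssh -- "sudo journalctl -u crio -n 50 --no-pager"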

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.21s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:370: (dbg) Run:  out/minikube-linux-arm64 -p functional-326957 image load --daemon kicbase/echo-server:functional-326957 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-326957 image ls
functional_test.go:461: expected "kicbase/echo-server:functional-326957" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.21s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.18s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:380: (dbg) Run:  out/minikube-linux-arm64 -p functional-326957 image load --daemon kicbase/echo-server:functional-326957 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-326957 image ls
functional_test.go:461: expected "kicbase/echo-server:functional-326957" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.18s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.37s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:250: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:255: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-326957
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-326957 image load --daemon kicbase/echo-server:functional-326957 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-326957 image ls
functional_test.go:461: expected "kicbase/echo-server:functional-326957" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.37s)
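The three daemon-load failures above share one pattern: minikube image load --daemon returns without error, but the tag never appears in image ls, so the later assertions cannot pass. A verification sketch that checks each stage of the pull, tag, and load pipeline separately (the grep filter is only illustrative):

	docker pull kicbase/echo-server:latest
	docker tag kicbase/echo-server:latest kicbase/echo-server:functional-326957
	# confirm the tag exists on the host before asking minikube to load it
	docker image inspect kicbase/echo-server:functional-326957 --format '{{.Id}}'
	out/minikube-linux-arm64 -p functional-326957 image load --daemon kicbase/echo-server:functional-326957 --alsologtostderr
	# the freshly loaded tag should now be visible inside the cluster
	out/minikube-linux-arm64 -p functional-326957 image ls | grep echo-server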

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.4s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:395: (dbg) Run:  out/minikube-linux-arm64 -p functional-326957 image save kicbase/echo-server:functional-326957 /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar --alsologtostderr
functional_test.go:401: expected "/home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar" to exist after `image save`, but doesn't exist
--- FAIL: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.40s)
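image save exited without writing the tarball, most plausibly because the tag was never present in the cluster's image store (see the load failures above); the missing file then cascades into the ImageLoadFromFile failure below. A sketch that makes that dependency explicit, reusing the profile and path from the test:

	# only attempt the save if the tag is actually present, then confirm the tarball landed
	out/minikube-linux-arm64 -p functional-326957 image ls | grep functional-326957 \
	  && out/minikube-linux-arm64 -p functional-326957 image save kicbase/echo-server:functional-326957 \
	       /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar --alsologtostderr \
	  && ls -l /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar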

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.23s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:424: (dbg) Run:  out/minikube-linux-arm64 -p functional-326957 image load /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar --alsologtostderr
functional_test.go:426: loading image into minikube from file: <nil>

                                                
                                                
** stderr ** 
	I1009 19:11:21.179757  319392 out.go:360] Setting OutFile to fd 1 ...
	I1009 19:11:21.180614  319392 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1009 19:11:21.180631  319392 out.go:374] Setting ErrFile to fd 2...
	I1009 19:11:21.180661  319392 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1009 19:11:21.181150  319392 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21683-294150/.minikube/bin
	I1009 19:11:21.182252  319392 config.go:182] Loaded profile config "functional-326957": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1009 19:11:21.182441  319392 config.go:182] Loaded profile config "functional-326957": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1009 19:11:21.182960  319392 cli_runner.go:164] Run: docker container inspect functional-326957 --format={{.State.Status}}
	I1009 19:11:21.201546  319392 ssh_runner.go:195] Run: systemctl --version
	I1009 19:11:21.201601  319392 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-326957
	I1009 19:11:21.233333  319392 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33149 SSHKeyPath:/home/jenkins/minikube-integration/21683-294150/.minikube/machines/functional-326957/id_rsa Username:docker}
	I1009 19:11:21.336686  319392 cache_images.go:290] Loading image from: /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar
	W1009 19:11:21.336746  319392 cache_images.go:254] Failed to load cached images for "functional-326957": loading images: stat /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar: no such file or directory
	I1009 19:11:21.336768  319392 cache_images.go:266] failed pushing to: functional-326957

                                                
                                                
** /stderr **
--- FAIL: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.23s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.51s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:434: (dbg) Run:  docker rmi kicbase/echo-server:functional-326957
functional_test.go:439: (dbg) Run:  out/minikube-linux-arm64 -p functional-326957 image save --daemon kicbase/echo-server:functional-326957 --alsologtostderr
functional_test.go:447: (dbg) Run:  docker image inspect localhost/kicbase/echo-server:functional-326957
functional_test.go:447: (dbg) Non-zero exit: docker image inspect localhost/kicbase/echo-server:functional-326957: exit status 1 (28.581106ms)

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error response from daemon: No such image: localhost/kicbase/echo-server:functional-326957

                                                
                                                
** /stderr **
functional_test.go:449: expected image to be loaded into Docker, but image was not found: exit status 1

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error response from daemon: No such image: localhost/kicbase/echo-server:functional-326957

                                                
                                                
** /stderr **
--- FAIL: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.51s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/DeployApp (601.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1451: (dbg) Run:  kubectl --context functional-326957 create deployment hello-node --image kicbase/echo-server
functional_test.go:1455: (dbg) Run:  kubectl --context functional-326957 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1460: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:352: "hello-node-75c85bcc94-vwgnr" [5e35a642-ff0d-4179-97fb-35e4c1c36818] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
E1009 19:11:58.597451  296002 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/addons-999657/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1009 19:14:14.731409  296002 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/addons-999657/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1009 19:14:42.439801  296002 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/addons-999657/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1009 19:19:14.730733  296002 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/addons-999657/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:337: TestFunctional/parallel/ServiceCmd/DeployApp: WARNING: pod list for "default" "app=hello-node" returned: client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline
functional_test.go:1460: ***** TestFunctional/parallel/ServiceCmd/DeployApp: pod "app=hello-node" failed to start within 10m0s: context deadline exceeded ****
functional_test.go:1460: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-326957 -n functional-326957
functional_test.go:1460: TestFunctional/parallel/ServiceCmd/DeployApp: showing logs for failed pods as of 2025-10-09 19:21:48.36629353 +0000 UTC m=+1311.971091292
functional_test.go:1460: (dbg) Run:  kubectl --context functional-326957 describe po hello-node-75c85bcc94-vwgnr -n default
functional_test.go:1460: (dbg) kubectl --context functional-326957 describe po hello-node-75c85bcc94-vwgnr -n default:
Name:             hello-node-75c85bcc94-vwgnr
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-326957/192.168.49.2
Start Time:       Thu, 09 Oct 2025 19:11:47 +0000
Labels:           app=hello-node
pod-template-hash=75c85bcc94
Annotations:      <none>
Status:           Pending
IP:               10.244.0.8
IPs:
IP:           10.244.0.8
Controlled By:  ReplicaSet/hello-node-75c85bcc94
Containers:
echo-server:
Container ID:   
Image:          kicbase/echo-server
Image ID:       
Port:           <none>
Host Port:      <none>
State:          Waiting
Reason:       ImagePullBackOff
Ready:          False
Restart Count:  0
Environment:    <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-jfkl7 (ro)
Conditions:
Type                        Status
PodReadyToStartContainers   True 
Initialized                 True 
Ready                       False 
ContainersReady             False 
PodScheduled                True 
Volumes:
kube-api-access-jfkl7:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
Optional:                false
DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                   From               Message
----     ------     ----                  ----               -------
Normal   Scheduled  10m                   default-scheduler  Successfully assigned default/hello-node-75c85bcc94-vwgnr to functional-326957
Normal   Pulling    7m4s (x5 over 10m)    kubelet            Pulling image "kicbase/echo-server"
Warning  Failed     7m4s (x5 over 10m)    kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
Warning  Failed     7m4s (x5 over 10m)    kubelet            Error: ErrImagePull
Warning  Failed     4m53s (x20 over 10m)  kubelet            Error: ImagePullBackOff
Normal   BackOff    4m40s (x21 over 10m)  kubelet            Back-off pulling image "kicbase/echo-server"
functional_test.go:1460: (dbg) Run:  kubectl --context functional-326957 logs hello-node-75c85bcc94-vwgnr -n default
functional_test.go:1460: (dbg) Non-zero exit: kubectl --context functional-326957 logs hello-node-75c85bcc94-vwgnr -n default: exit status 1 (100.408409ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "echo-server" in pod "hello-node-75c85bcc94-vwgnr" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
functional_test.go:1460: kubectl --context functional-326957 logs hello-node-75c85bcc94-vwgnr -n default: exit status 1
functional_test.go:1461: failed waiting for hello-node pod: app=hello-node within 10m0s: context deadline exceeded
--- FAIL: TestFunctional/parallel/ServiceCmd/DeployApp (601.01s)
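The deployment never gets a running pod for the same short-name reason seen in ServiceCmdConnect: the bare kicbase/echo-server reference is ambiguous under CRI-O's enforcing mode. One hedged way to retry without recreating the deployment is to point the existing echo-server container (its name comes from the describe output above) at a fully qualified image; docker.io as the source registry is an assumption:

	kubectl --context functional-326957 set image deployment/hello-node \
	  echo-server=docker.io/kicbase/echo-server:latest
	# wait for the rollout instead of polling pods for the full ten minutes
	kubectl --context functional-326957 rollout status deployment/hello-node --timeout=120s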

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/HTTPS (0.55s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1519: (dbg) Run:  out/minikube-linux-arm64 -p functional-326957 service --namespace=default --https --url hello-node
functional_test.go:1519: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-326957 service --namespace=default --https --url hello-node: exit status 115 (551.570637ms)

                                                
                                                
-- stdout --
	https://192.168.49.2:32312
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service hello-node found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_3af0dd3f106bd0c134df3d834cbdbb288a06d35d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:1521: failed to get service url. args "out/minikube-linux-arm64 -p functional-326957 service --namespace=default --https --url hello-node" : exit status 115
--- FAIL: TestFunctional/parallel/ServiceCmd/HTTPS (0.55s)
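The URL is computed (https://192.168.49.2:32312) but minikube exits with SVC_UNREACHABLE because no ready pod backs the hello-node service; the Format and URL subtests below fail the same way. A quick sketch for confirming that from the Kubernetes side using EndpointSlices, consistent with the Endpoints-deprecation warnings earlier in this report (kubernetes.io/service-name is the standard slice label):

	kubectl --context functional-326957 get endpointslices -l kubernetes.io/service-name=hello-node -o wide
	kubectl --context functional-326957 get pods -l app=hello-node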

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/Format (0.53s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1550: (dbg) Run:  out/minikube-linux-arm64 -p functional-326957 service hello-node --url --format={{.IP}}
functional_test.go:1550: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-326957 service hello-node --url --format={{.IP}}: exit status 115 (529.114232ms)

                                                
                                                
-- stdout --
	192.168.49.2
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service hello-node found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_7cc4328ee572bf2be3730700e5bda4ff5ee9066f_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:1552: failed to get service url with custom format. args "out/minikube-linux-arm64 -p functional-326957 service hello-node --url --format={{.IP}}": exit status 115
--- FAIL: TestFunctional/parallel/ServiceCmd/Format (0.53s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/URL (0.58s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1569: (dbg) Run:  out/minikube-linux-arm64 -p functional-326957 service hello-node --url
functional_test.go:1569: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-326957 service hello-node --url: exit status 115 (581.066029ms)

                                                
                                                
-- stdout --
	http://192.168.49.2:32312
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service hello-node found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_7cc4328ee572bf2be3730700e5bda4ff5ee9066f_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:1571: failed to get service url. args: "out/minikube-linux-arm64 -p functional-326957 service hello-node --url": exit status 115
functional_test.go:1575: found endpoint for hello-node: http://192.168.49.2:32312
--- FAIL: TestFunctional/parallel/ServiceCmd/URL (0.58s)

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartClusterKeepsNodes (535.17s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-linux-arm64 -p ha-807463 node list --alsologtostderr -v 5
ha_test.go:464: (dbg) Run:  out/minikube-linux-arm64 -p ha-807463 stop --alsologtostderr -v 5
ha_test.go:464: (dbg) Done: out/minikube-linux-arm64 -p ha-807463 stop --alsologtostderr -v 5: (26.995265652s)
ha_test.go:469: (dbg) Run:  out/minikube-linux-arm64 -p ha-807463 start --wait true --alsologtostderr -v 5
E1009 19:27:43.912255  296002 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/functional-326957/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1009 19:29:05.833571  296002 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/functional-326957/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1009 19:29:14.730636  296002 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/addons-999657/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1009 19:31:21.974380  296002 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/functional-326957/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1009 19:31:49.675063  296002 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/functional-326957/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1009 19:34:14.730706  296002 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/addons-999657/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:469: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-807463 start --wait true --alsologtostderr -v 5: exit status 80 (8m25.107550684s)

                                                
                                                
-- stdout --
	* [ha-807463] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21683
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21683-294150/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21683-294150/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	* Starting "ha-807463" primary control-plane node in "ha-807463" cluster
	* Pulling base image v0.0.48-1759745255-21703 ...
	* Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	* Enabled addons: 
	
	* Starting "ha-807463-m02" control-plane node in "ha-807463" cluster
	* Pulling base image v0.0.48-1759745255-21703 ...
	* Found network options:
	  - NO_PROXY=192.168.49.2
	* Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	  - env NO_PROXY=192.168.49.2
	* Verifying Kubernetes components...
	
	* Starting "ha-807463-m03" control-plane node in "ha-807463" cluster
	* Pulling base image v0.0.48-1759745255-21703 ...
	* Found network options:
	  - NO_PROXY=192.168.49.2,192.168.49.3
	* Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	  - env NO_PROXY=192.168.49.2
	  - env NO_PROXY=192.168.49.2,192.168.49.3
	* Verifying Kubernetes components...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1009 19:27:31.218830  343307 out.go:360] Setting OutFile to fd 1 ...
	I1009 19:27:31.218980  343307 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1009 19:27:31.218993  343307 out.go:374] Setting ErrFile to fd 2...
	I1009 19:27:31.219013  343307 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1009 19:27:31.219307  343307 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21683-294150/.minikube/bin
	I1009 19:27:31.219769  343307 out.go:368] Setting JSON to false
	I1009 19:27:31.220680  343307 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":7791,"bootTime":1760030261,"procs":153,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1009 19:27:31.220751  343307 start.go:143] virtualization:  
	I1009 19:27:31.225902  343307 out.go:179] * [ha-807463] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1009 19:27:31.229045  343307 out.go:179]   - MINIKUBE_LOCATION=21683
	I1009 19:27:31.229154  343307 notify.go:221] Checking for updates...
	I1009 19:27:31.235436  343307 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1009 19:27:31.238296  343307 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21683-294150/kubeconfig
	I1009 19:27:31.241057  343307 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21683-294150/.minikube
	I1009 19:27:31.243947  343307 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1009 19:27:31.246781  343307 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1009 19:27:31.250030  343307 config.go:182] Loaded profile config "ha-807463": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1009 19:27:31.250184  343307 driver.go:422] Setting default libvirt URI to qemu:///system
	I1009 19:27:31.286472  343307 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1009 19:27:31.286604  343307 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1009 19:27:31.343705  343307 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:4 ContainersRunning:0 ContainersPaused:0 ContainersStopped:4 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:27 OomKillDisable:true NGoroutines:42 SystemTime:2025-10-09 19:27:31.334706362 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214827008 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1009 19:27:31.343816  343307 docker.go:319] overlay module found
	I1009 19:27:31.346870  343307 out.go:179] * Using the docker driver based on existing profile
	I1009 19:27:31.349767  343307 start.go:309] selected driver: docker
	I1009 19:27:31.349786  343307 start.go:930] validating driver "docker" against &{Name:ha-807463 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-807463 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.1 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1009 19:27:31.349926  343307 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1009 19:27:31.350028  343307 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1009 19:27:31.412249  343307 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:4 ContainersRunning:0 ContainersPaused:0 ContainersStopped:4 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:27 OomKillDisable:true NGoroutines:42 SystemTime:2025-10-09 19:27:31.403030574 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214827008 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1009 19:27:31.412653  343307 start_flags.go:993] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1009 19:27:31.412689  343307 cni.go:84] Creating CNI manager for ""
	I1009 19:27:31.412755  343307 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I1009 19:27:31.412799  343307 start.go:353] cluster config:
	{Name:ha-807463 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-807463 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1009 19:27:31.417709  343307 out.go:179] * Starting "ha-807463" primary control-plane node in "ha-807463" cluster
	I1009 19:27:31.420530  343307 cache.go:123] Beginning downloading kic base image for docker with crio
	I1009 19:27:31.423466  343307 out.go:179] * Pulling base image v0.0.48-1759745255-21703 ...
	I1009 19:27:31.426321  343307 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1009 19:27:31.426392  343307 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21683-294150/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1009 19:27:31.426406  343307 cache.go:58] Caching tarball of preloaded images
	I1009 19:27:31.426410  343307 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon
	I1009 19:27:31.426490  343307 preload.go:233] Found /home/jenkins/minikube-integration/21683-294150/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1009 19:27:31.426508  343307 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1009 19:27:31.426650  343307 profile.go:143] Saving config to /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/ha-807463/config.json ...
	I1009 19:27:31.445925  343307 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon, skipping pull
	I1009 19:27:31.445951  343307 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 exists in daemon, skipping load
	I1009 19:27:31.445969  343307 cache.go:232] Successfully downloaded all kic artifacts
	I1009 19:27:31.446007  343307 start.go:361] acquireMachinesLock for ha-807463: {Name:mk7b03a6b271157d59e205354be444442bc66672 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1009 19:27:31.446069  343307 start.go:365] duration metric: took 41.674µs to acquireMachinesLock for "ha-807463"
	I1009 19:27:31.446095  343307 start.go:97] Skipping create...Using existing machine configuration
	I1009 19:27:31.446101  343307 fix.go:55] fixHost starting: 
	I1009 19:27:31.446358  343307 cli_runner.go:164] Run: docker container inspect ha-807463 --format={{.State.Status}}
	I1009 19:27:31.463339  343307 fix.go:113] recreateIfNeeded on ha-807463: state=Stopped err=<nil>
	W1009 19:27:31.463369  343307 fix.go:139] unexpected machine state, will restart: <nil>
	I1009 19:27:31.466724  343307 out.go:252] * Restarting existing docker container for "ha-807463" ...
	I1009 19:27:31.466808  343307 cli_runner.go:164] Run: docker start ha-807463
	I1009 19:27:31.729554  343307 cli_runner.go:164] Run: docker container inspect ha-807463 --format={{.State.Status}}
	I1009 19:27:31.752533  343307 kic.go:430] container "ha-807463" state is running.
	I1009 19:27:31.752940  343307 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-807463
	I1009 19:27:31.776613  343307 profile.go:143] Saving config to /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/ha-807463/config.json ...
	I1009 19:27:31.776858  343307 machine.go:93] provisionDockerMachine start ...
	I1009 19:27:31.776933  343307 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-807463
	I1009 19:27:31.798253  343307 main.go:141] libmachine: Using SSH client type: native
	I1009 19:27:31.798586  343307 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33181 <nil> <nil>}
	I1009 19:27:31.798603  343307 main.go:141] libmachine: About to run SSH command:
	hostname
	I1009 19:27:31.799247  343307 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1009 19:27:34.945362  343307 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-807463
	
	I1009 19:27:34.945397  343307 ubuntu.go:182] provisioning hostname "ha-807463"
	I1009 19:27:34.945467  343307 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-807463
	I1009 19:27:34.962891  343307 main.go:141] libmachine: Using SSH client type: native
	I1009 19:27:34.963208  343307 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33181 <nil> <nil>}
	I1009 19:27:34.963226  343307 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-807463 && echo "ha-807463" | sudo tee /etc/hostname
	I1009 19:27:35.120375  343307 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-807463
	
	I1009 19:27:35.120459  343307 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-807463
	I1009 19:27:35.138932  343307 main.go:141] libmachine: Using SSH client type: native
	I1009 19:27:35.139244  343307 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33181 <nil> <nil>}
	I1009 19:27:35.139259  343307 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-807463' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-807463/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-807463' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1009 19:27:35.285402  343307 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1009 19:27:35.285451  343307 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21683-294150/.minikube CaCertPath:/home/jenkins/minikube-integration/21683-294150/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21683-294150/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21683-294150/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21683-294150/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21683-294150/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21683-294150/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21683-294150/.minikube}
	I1009 19:27:35.285478  343307 ubuntu.go:190] setting up certificates
	I1009 19:27:35.285488  343307 provision.go:84] configureAuth start
	I1009 19:27:35.285558  343307 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-807463
	I1009 19:27:35.302829  343307 provision.go:143] copyHostCerts
	I1009 19:27:35.302873  343307 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-294150/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21683-294150/.minikube/ca.pem
	I1009 19:27:35.302904  343307 exec_runner.go:144] found /home/jenkins/minikube-integration/21683-294150/.minikube/ca.pem, removing ...
	I1009 19:27:35.302917  343307 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21683-294150/.minikube/ca.pem
	I1009 19:27:35.303005  343307 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-294150/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21683-294150/.minikube/ca.pem (1078 bytes)
	I1009 19:27:35.303096  343307 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-294150/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21683-294150/.minikube/cert.pem
	I1009 19:27:35.303118  343307 exec_runner.go:144] found /home/jenkins/minikube-integration/21683-294150/.minikube/cert.pem, removing ...
	I1009 19:27:35.303127  343307 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21683-294150/.minikube/cert.pem
	I1009 19:27:35.303156  343307 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-294150/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21683-294150/.minikube/cert.pem (1123 bytes)
	I1009 19:27:35.303204  343307 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-294150/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21683-294150/.minikube/key.pem
	I1009 19:27:35.303225  343307 exec_runner.go:144] found /home/jenkins/minikube-integration/21683-294150/.minikube/key.pem, removing ...
	I1009 19:27:35.303230  343307 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21683-294150/.minikube/key.pem
	I1009 19:27:35.303255  343307 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-294150/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21683-294150/.minikube/key.pem (1679 bytes)
	I1009 19:27:35.303308  343307 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21683-294150/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21683-294150/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21683-294150/.minikube/certs/ca-key.pem org=jenkins.ha-807463 san=[127.0.0.1 192.168.49.2 ha-807463 localhost minikube]
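
configureAuth regenerates a server certificate whose subject alternative names cover every address the machine may be reached on: loopback, the container IP 192.168.49.2, the hostname, localhost and minikube. A minimal self-signed sketch with crypto/x509 covering the same kind of SAN list; the real flow signs with the ca.pem/ca-key.pem pair listed above rather than self-signing:

	package main

	import (
		"crypto/ecdsa"
		"crypto/elliptic"
		"crypto/rand"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"math/big"
		"net"
		"os"
		"time"
	)

	func main() {
		// Self-signed stand-in for illustration; minikube signs with its own CA key.
		key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
		if err != nil {
			panic(err)
		}
		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(1),
			Subject:      pkix.Name{Organization: []string{"jenkins.ha-807463"}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			// SANs mirroring the log: host names plus the addresses the node answers on.
			DNSNames:    []string{"ha-807463", "localhost", "minikube"},
			IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.49.2")},
		}
		der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
		if err != nil {
			panic(err)
		}
		pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
	}
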
	I1009 19:27:35.901224  343307 provision.go:177] copyRemoteCerts
	I1009 19:27:35.901289  343307 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1009 19:27:35.901355  343307 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-807463
	I1009 19:27:35.918214  343307 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33181 SSHKeyPath:/home/jenkins/minikube-integration/21683-294150/.minikube/machines/ha-807463/id_rsa Username:docker}
	I1009 19:27:36.021624  343307 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-294150/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1009 19:27:36.021693  343307 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-294150/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1009 19:27:36.040520  343307 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-294150/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1009 19:27:36.040583  343307 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-294150/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I1009 19:27:36.059254  343307 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-294150/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1009 19:27:36.059315  343307 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-294150/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1009 19:27:36.078084  343307 provision.go:87] duration metric: took 792.56918ms to configureAuth
	I1009 19:27:36.078112  343307 ubuntu.go:206] setting minikube options for container-runtime
	I1009 19:27:36.078344  343307 config.go:182] Loaded profile config "ha-807463": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1009 19:27:36.078465  343307 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-807463
	I1009 19:27:36.095675  343307 main.go:141] libmachine: Using SSH client type: native
	I1009 19:27:36.095992  343307 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33181 <nil> <nil>}
	I1009 19:27:36.096012  343307 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1009 19:27:36.425006  343307 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1009 19:27:36.425081  343307 machine.go:96] duration metric: took 4.648205511s to provisionDockerMachine
	I1009 19:27:36.425141  343307 start.go:294] postStartSetup for "ha-807463" (driver="docker")
	I1009 19:27:36.425177  343307 start.go:323] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1009 19:27:36.425298  343307 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1009 19:27:36.425384  343307 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-807463
	I1009 19:27:36.449453  343307 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33181 SSHKeyPath:/home/jenkins/minikube-integration/21683-294150/.minikube/machines/ha-807463/id_rsa Username:docker}
	I1009 19:27:36.553510  343307 ssh_runner.go:195] Run: cat /etc/os-release
	I1009 19:27:36.557246  343307 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1009 19:27:36.557278  343307 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1009 19:27:36.557290  343307 filesync.go:126] Scanning /home/jenkins/minikube-integration/21683-294150/.minikube/addons for local assets ...
	I1009 19:27:36.557367  343307 filesync.go:126] Scanning /home/jenkins/minikube-integration/21683-294150/.minikube/files for local assets ...
	I1009 19:27:36.557489  343307 filesync.go:149] local asset: /home/jenkins/minikube-integration/21683-294150/.minikube/files/etc/ssl/certs/2960022.pem -> 2960022.pem in /etc/ssl/certs
	I1009 19:27:36.557501  343307 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-294150/.minikube/files/etc/ssl/certs/2960022.pem -> /etc/ssl/certs/2960022.pem
	I1009 19:27:36.557607  343307 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1009 19:27:36.565210  343307 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-294150/.minikube/files/etc/ssl/certs/2960022.pem --> /etc/ssl/certs/2960022.pem (1708 bytes)
	I1009 19:27:36.583083  343307 start.go:297] duration metric: took 157.903278ms for postStartSetup
	I1009 19:27:36.583210  343307 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1009 19:27:36.583282  343307 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-807463
	I1009 19:27:36.600612  343307 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33181 SSHKeyPath:/home/jenkins/minikube-integration/21683-294150/.minikube/machines/ha-807463/id_rsa Username:docker}
	I1009 19:27:36.698274  343307 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1009 19:27:36.703016  343307 fix.go:57] duration metric: took 5.256907577s for fixHost
	I1009 19:27:36.703042  343307 start.go:84] releasing machines lock for "ha-807463", held for 5.256957103s
	I1009 19:27:36.703115  343307 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-807463
	I1009 19:27:36.720370  343307 ssh_runner.go:195] Run: cat /version.json
	I1009 19:27:36.720385  343307 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1009 19:27:36.720422  343307 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-807463
	I1009 19:27:36.720451  343307 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-807463
	I1009 19:27:36.743233  343307 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33181 SSHKeyPath:/home/jenkins/minikube-integration/21683-294150/.minikube/machines/ha-807463/id_rsa Username:docker}
	I1009 19:27:36.753326  343307 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33181 SSHKeyPath:/home/jenkins/minikube-integration/21683-294150/.minikube/machines/ha-807463/id_rsa Username:docker}
	I1009 19:27:36.948710  343307 ssh_runner.go:195] Run: systemctl --version
	I1009 19:27:36.955436  343307 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1009 19:27:36.994992  343307 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1009 19:27:37.001157  343307 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1009 19:27:37.001242  343307 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1009 19:27:37.015899  343307 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1009 19:27:37.015931  343307 start.go:496] detecting cgroup driver to use...
	I1009 19:27:37.016002  343307 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1009 19:27:37.016099  343307 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1009 19:27:37.034350  343307 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1009 19:27:37.049609  343307 docker.go:218] disabling cri-docker service (if available) ...
	I1009 19:27:37.049706  343307 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1009 19:27:37.065757  343307 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1009 19:27:37.079370  343307 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1009 19:27:37.204726  343307 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1009 19:27:37.324926  343307 docker.go:234] disabling docker service ...
	I1009 19:27:37.325051  343307 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1009 19:27:37.340669  343307 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1009 19:27:37.354186  343307 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1009 19:27:37.468499  343307 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1009 19:27:37.609321  343307 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1009 19:27:37.623308  343307 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1009 19:27:37.638872  343307 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1009 19:27:37.638957  343307 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:27:37.648255  343307 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1009 19:27:37.648376  343307 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:27:37.658302  343307 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:27:37.667181  343307 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:27:37.675984  343307 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1009 19:27:37.685440  343307 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:27:37.694680  343307 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:27:37.702750  343307 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:27:37.711421  343307 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1009 19:27:37.719182  343307 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1009 19:27:37.727483  343307 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 19:27:37.841375  343307 ssh_runner.go:195] Run: sudo systemctl restart crio
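
The sed calls above pin the pause image and force the cgroupfs cgroup manager in /etc/crio/crio.conf.d/02-crio.conf before CRI-O is restarted. A rough Go equivalent of those two line rewrites, assuming the file were writable locally (in the log the edits run via sudo over SSH), purely to show the substitution:

	package main

	import (
		"os"
		"regexp"
	)

	func main() {
		const conf = "/etc/crio/crio.conf.d/02-crio.conf" // path from the log
		data, err := os.ReadFile(conf)
		if err != nil {
			panic(err)
		}
		s := string(data)
		// Mirror: sed 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|'
		s = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
			ReplaceAllString(s, `pause_image = "registry.k8s.io/pause:3.10.1"`)
		// Mirror: sed 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|'
		s = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
			ReplaceAllString(s, `cgroup_manager = "cgroupfs"`)
		if err := os.WriteFile(conf, []byte(s), 0o644); err != nil {
			panic(err)
		}
		// The log then issues systemctl daemon-reload and systemctl restart crio.
	}
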
	I1009 19:27:37.980708  343307 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1009 19:27:37.980812  343307 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1009 19:27:37.984807  343307 start.go:564] Will wait 60s for crictl version
	I1009 19:27:37.984933  343307 ssh_runner.go:195] Run: which crictl
	I1009 19:27:37.988572  343307 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1009 19:27:38.021983  343307 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1009 19:27:38.022073  343307 ssh_runner.go:195] Run: crio --version
	I1009 19:27:38.052703  343307 ssh_runner.go:195] Run: crio --version
	I1009 19:27:38.085238  343307 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1009 19:27:38.088088  343307 cli_runner.go:164] Run: docker network inspect ha-807463 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1009 19:27:38.104470  343307 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1009 19:27:38.108353  343307 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
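
The bash one-liner above makes the host.minikube.internal entry idempotent: drop any previous line mapping that name, append the network gateway IP, then copy the result back over /etc/hosts. The same filter-then-append logic sketched in Go, writing to a hypothetical staging file (the real command stages to /tmp/h.$$ and sudo-copies it into place):

	package main

	import (
		"os"
		"strings"
	)

	func main() {
		const entry = "192.168.49.1\thost.minikube.internal" // gateway IP from the log
		data, err := os.ReadFile("/etc/hosts")
		if err != nil {
			panic(err)
		}
		lines := strings.Split(strings.TrimRight(string(data), "\n"), "\n")
		var kept []string
		for _, line := range lines {
			// Drop any existing host.minikube.internal mapping so the entry is never duplicated.
			if strings.HasSuffix(line, "\thost.minikube.internal") {
				continue
			}
			kept = append(kept, line)
		}
		kept = append(kept, entry)
		out := strings.Join(kept, "\n") + "\n"
		// Stage the new file; the real flow then does `sudo cp` over /etc/hosts.
		if err := os.WriteFile("/tmp/hosts.new", []byte(out), 0o644); err != nil {
			panic(err)
		}
	}
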
	I1009 19:27:38.118588  343307 kubeadm.go:883] updating cluster {Name:ha-807463 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-807463 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APISe
rverNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:
false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:f
alse DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1009 19:27:38.118741  343307 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1009 19:27:38.118810  343307 ssh_runner.go:195] Run: sudo crictl images --output json
	I1009 19:27:38.155316  343307 crio.go:514] all images are preloaded for cri-o runtime.
	I1009 19:27:38.155341  343307 crio.go:433] Images already preloaded, skipping extraction
	I1009 19:27:38.155400  343307 ssh_runner.go:195] Run: sudo crictl images --output json
	I1009 19:27:38.184223  343307 crio.go:514] all images are preloaded for cri-o runtime.
	I1009 19:27:38.184246  343307 cache_images.go:85] Images are preloaded, skipping loading
	I1009 19:27:38.184257  343307 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.34.1 crio true true} ...
	I1009 19:27:38.184370  343307 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-807463 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:ha-807463 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1009 19:27:38.184448  343307 ssh_runner.go:195] Run: crio config
	I1009 19:27:38.252414  343307 cni.go:84] Creating CNI manager for ""
	I1009 19:27:38.252436  343307 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I1009 19:27:38.252454  343307 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1009 19:27:38.252488  343307 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-807463 NodeName:ha-807463 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/mani
fests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1009 19:27:38.252634  343307 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-807463"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1009 19:27:38.252656  343307 kube-vip.go:115] generating kube-vip config ...
	I1009 19:27:38.252721  343307 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I1009 19:27:38.265014  343307 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
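
kube-vip's control-plane load balancing is only enabled when the ip_vs kernel modules are loaded; here `lsmod | grep ip_vs` exits non-zero, so the generator falls back to plain ARP-based VIP failover. A sketch of the same probe done by reading /proc/modules directly (the same data lsmod formats), for illustration only:

	package main

	import (
		"bufio"
		"fmt"
		"os"
		"strings"
	)

	// ipvsLoaded reports whether any ip_vs* kernel module appears in /proc/modules.
	func ipvsLoaded() (bool, error) {
		f, err := os.Open("/proc/modules")
		if err != nil {
			return false, err
		}
		defer f.Close()
		sc := bufio.NewScanner(f)
		for sc.Scan() {
			fields := strings.Fields(sc.Text())
			if len(fields) > 0 && strings.HasPrefix(fields[0], "ip_vs") {
				return true, nil
			}
		}
		return false, sc.Err()
	}

	func main() {
		ok, err := ipvsLoaded()
		if err != nil {
			panic(err)
		}
		if !ok {
			fmt.Println("ip_vs not available: skip IPVS load balancing, keep ARP-based VIP only")
			return
		}
		fmt.Println("ip_vs available: control-plane load balancing can be enabled")
	}
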
	I1009 19:27:38.265147  343307 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
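
The static pod manifest above is rendered with the cluster's VIP (192.168.49.254), the interface (eth0), and the API port (8443) substituted into a fixed skeleton. A toy rendering of just the environment section with text/template, using illustrative field names, to show the substitution pattern rather than minikube's actual template:

	package main

	import (
		"os"
		"text/template"
	)

	// vipParams holds the substituted values; the field names are illustrative.
	type vipParams struct {
		Address   string
		Interface string
		Port      string
	}

	const envTmpl = `    env:
	    - name: vip_interface
	      value: {{ .Interface }}
	    - name: port
	      value: "{{ .Port }}"
	    - name: address
	      value: {{ .Address }}
	`

	func main() {
		t := template.Must(template.New("kube-vip-env").Parse(envTmpl))
		// Values taken from the generated config above.
		p := vipParams{Address: "192.168.49.254", Interface: "eth0", Port: "8443"}
		if err := t.Execute(os.Stdout, p); err != nil {
			panic(err)
		}
	}
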
	I1009 19:27:38.265209  343307 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1009 19:27:38.272978  343307 binaries.go:44] Found k8s binaries, skipping transfer
	I1009 19:27:38.273096  343307 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I1009 19:27:38.280861  343307 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (359 bytes)
	I1009 19:27:38.294726  343307 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1009 19:27:38.307657  343307 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2206 bytes)
	I1009 19:27:38.320684  343307 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I1009 19:27:38.333393  343307 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1009 19:27:38.337014  343307 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1009 19:27:38.346725  343307 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 19:27:38.455808  343307 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1009 19:27:38.472442  343307 certs.go:69] Setting up /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/ha-807463 for IP: 192.168.49.2
	I1009 19:27:38.472472  343307 certs.go:195] generating shared ca certs ...
	I1009 19:27:38.472489  343307 certs.go:227] acquiring lock for ca certs: {Name:mk21fccf7de12baa51e226eec44546b42b579a70 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 19:27:38.472635  343307 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21683-294150/.minikube/ca.key
	I1009 19:27:38.472702  343307 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21683-294150/.minikube/proxy-client-ca.key
	I1009 19:27:38.472715  343307 certs.go:257] generating profile certs ...
	I1009 19:27:38.472790  343307 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/ha-807463/client.key
	I1009 19:27:38.472829  343307 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/ha-807463/apiserver.key.2f140c92
	I1009 19:27:38.472846  343307 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/ha-807463/apiserver.crt.2f140c92 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2 192.168.49.3 192.168.49.4 192.168.49.254]
	I1009 19:27:38.846814  343307 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/ha-807463/apiserver.crt.2f140c92 ...
	I1009 19:27:38.846850  343307 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/ha-807463/apiserver.crt.2f140c92: {Name:mkc2191acbc8bdf29d69f0113598f387f3156525 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 19:27:38.847045  343307 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/ha-807463/apiserver.key.2f140c92 ...
	I1009 19:27:38.847059  343307 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/ha-807463/apiserver.key.2f140c92: {Name:mk4420d6a062c4dab2900704e5add4b492d36555 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 19:27:38.847148  343307 certs.go:382] copying /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/ha-807463/apiserver.crt.2f140c92 -> /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/ha-807463/apiserver.crt
	I1009 19:27:38.847292  343307 certs.go:386] copying /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/ha-807463/apiserver.key.2f140c92 -> /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/ha-807463/apiserver.key
	I1009 19:27:38.847425  343307 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/ha-807463/proxy-client.key
	I1009 19:27:38.847442  343307 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-294150/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1009 19:27:38.847458  343307 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-294150/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1009 19:27:38.847476  343307 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-294150/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1009 19:27:38.847488  343307 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-294150/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1009 19:27:38.847504  343307 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/ha-807463/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1009 19:27:38.847525  343307 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/ha-807463/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1009 19:27:38.847541  343307 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/ha-807463/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1009 19:27:38.847559  343307 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/ha-807463/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1009 19:27:38.847611  343307 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-294150/.minikube/certs/296002.pem (1338 bytes)
	W1009 19:27:38.847645  343307 certs.go:480] ignoring /home/jenkins/minikube-integration/21683-294150/.minikube/certs/296002_empty.pem, impossibly tiny 0 bytes
	I1009 19:27:38.847656  343307 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-294150/.minikube/certs/ca-key.pem (1679 bytes)
	I1009 19:27:38.847681  343307 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-294150/.minikube/certs/ca.pem (1078 bytes)
	I1009 19:27:38.847709  343307 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-294150/.minikube/certs/cert.pem (1123 bytes)
	I1009 19:27:38.847733  343307 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-294150/.minikube/certs/key.pem (1679 bytes)
	I1009 19:27:38.847781  343307 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-294150/.minikube/files/etc/ssl/certs/2960022.pem (1708 bytes)
	I1009 19:27:38.847811  343307 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-294150/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1009 19:27:38.847826  343307 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-294150/.minikube/certs/296002.pem -> /usr/share/ca-certificates/296002.pem
	I1009 19:27:38.847838  343307 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-294150/.minikube/files/etc/ssl/certs/2960022.pem -> /usr/share/ca-certificates/2960022.pem
	I1009 19:27:38.848384  343307 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-294150/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1009 19:27:38.867598  343307 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-294150/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1009 19:27:38.888313  343307 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-294150/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1009 19:27:38.908288  343307 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-294150/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1009 19:27:38.929572  343307 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/ha-807463/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1009 19:27:38.949045  343307 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/ha-807463/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1009 19:27:38.966969  343307 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/ha-807463/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1009 19:27:38.986319  343307 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/ha-807463/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1009 19:27:39.012715  343307 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-294150/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1009 19:27:39.032678  343307 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-294150/.minikube/certs/296002.pem --> /usr/share/ca-certificates/296002.pem (1338 bytes)
	I1009 19:27:39.051431  343307 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-294150/.minikube/files/etc/ssl/certs/2960022.pem --> /usr/share/ca-certificates/2960022.pem (1708 bytes)
	I1009 19:27:39.069614  343307 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1009 19:27:39.090445  343307 ssh_runner.go:195] Run: openssl version
	I1009 19:27:39.098940  343307 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1009 19:27:39.108430  343307 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1009 19:27:39.119839  343307 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  9 19:01 /usr/share/ca-certificates/minikubeCA.pem
	I1009 19:27:39.119907  343307 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1009 19:27:39.188461  343307 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1009 19:27:39.197309  343307 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/296002.pem && ln -fs /usr/share/ca-certificates/296002.pem /etc/ssl/certs/296002.pem"
	I1009 19:27:39.212076  343307 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/296002.pem
	I1009 19:27:39.218737  343307 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  9 19:08 /usr/share/ca-certificates/296002.pem
	I1009 19:27:39.218850  343307 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/296002.pem
	I1009 19:27:39.320003  343307 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/296002.pem /etc/ssl/certs/51391683.0"
	I1009 19:27:39.338511  343307 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2960022.pem && ln -fs /usr/share/ca-certificates/2960022.pem /etc/ssl/certs/2960022.pem"
	I1009 19:27:39.353078  343307 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2960022.pem
	I1009 19:27:39.358619  343307 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  9 19:08 /usr/share/ca-certificates/2960022.pem
	I1009 19:27:39.358736  343307 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2960022.pem
	I1009 19:27:39.417831  343307 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2960022.pem /etc/ssl/certs/3ec20f2e.0"
	I1009 19:27:39.430407  343307 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1009 19:27:39.437508  343307 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1009 19:27:39.502060  343307 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1009 19:27:39.549190  343307 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1009 19:27:39.599910  343307 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1009 19:27:39.657699  343307 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1009 19:27:39.729015  343307 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
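
The openssl x509 -noout -checkend 86400 runs above confirm each control-plane certificate is still valid for at least another 24 hours before the restart proceeds; certificates that fail the check would be regenerated. The same check expressed in Go against one of those PEM files, sketched with one of the paths from the log:

	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)

	func main() {
		// Any of the certs checked above, e.g. the etcd server cert.
		data, err := os.ReadFile("/var/lib/minikube/certs/etcd/server.crt")
		if err != nil {
			panic(err)
		}
		block, _ := pem.Decode(data)
		if block == nil {
			panic("no PEM block found")
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			panic(err)
		}
		// Equivalent of `openssl x509 -checkend 86400`: flag certs expiring within 24h.
		if time.Until(cert.NotAfter) < 24*time.Hour {
			fmt.Println("certificate will expire within 24h; regeneration needed")
			return
		}
		fmt.Println("certificate valid for at least another 24h")
	}
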
	I1009 19:27:39.791014  343307 kubeadm.go:400] StartCluster: {Name:ha-807463 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-807463 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServe
rNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:fal
se ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:fals
e DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1009 19:27:39.791208  343307 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1009 19:27:39.791318  343307 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1009 19:27:39.827907  343307 cri.go:89] found id: "9d475a483e7023b214d8a1506f2ba793d2cb34e4e0e7b5f0fc49d91b875116f7"
	I1009 19:27:39.827980  343307 cri.go:89] found id: "eb3eb3edb2fff30f90b98210a15c7960a0d8f4700c380a4bc2a236e3530d4043"
	I1009 19:27:39.828002  343307 cri.go:89] found id: "e4593fb70e6dd0047bc83f89897d4c1ad23896e5ca9a3628c4bbeea360f8cbaf"
	I1009 19:27:39.828027  343307 cri.go:89] found id: "60abd5bf9ea13b7e15b4cb133643cb620ae0f536d45d6ac30703be2e3ef7a45f"
	I1009 19:27:39.828064  343307 cri.go:89] found id: "4477522bd8536fe09afcc2397cd8beb927ccd19a6714098fb7bb1f3ef47595ea"
	I1009 19:27:39.828090  343307 cri.go:89] found id: ""
	I1009 19:27:39.828175  343307 ssh_runner.go:195] Run: sudo runc list -f json
	W1009 19:27:39.846495  343307 kubeadm.go:407] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-09T19:27:39Z" level=error msg="open /run/runc: no such file or directory"
	I1009 19:27:39.846575  343307 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1009 19:27:39.873447  343307 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1009 19:27:39.873525  343307 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1009 19:27:39.873618  343307 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1009 19:27:39.890893  343307 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1009 19:27:39.891370  343307 kubeconfig.go:47] verify endpoint returned: get endpoint: "ha-807463" does not appear in /home/jenkins/minikube-integration/21683-294150/kubeconfig
	I1009 19:27:39.891541  343307 kubeconfig.go:62] /home/jenkins/minikube-integration/21683-294150/kubeconfig needs updating (will repair): [kubeconfig missing "ha-807463" cluster setting kubeconfig missing "ha-807463" context setting]
	I1009 19:27:39.891898  343307 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-294150/kubeconfig: {Name:mke4661344aa77e10bf6690825e8d3a29b29b411 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 19:27:39.892555  343307 kapi.go:59] client config for ha-807463: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21683-294150/.minikube/profiles/ha-807463/client.crt", KeyFile:"/home/jenkins/minikube-integration/21683-294150/.minikube/profiles/ha-807463/client.key", CAFile:"/home/jenkins/minikube-integration/21683-294150/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil
)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2120180), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1009 19:27:39.893429  343307 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1009 19:27:39.893485  343307 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1009 19:27:39.893506  343307 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1009 19:27:39.893530  343307 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1009 19:27:39.893571  343307 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1009 19:27:39.894036  343307 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1009 19:27:39.894259  343307 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1009 19:27:39.909848  343307 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.49.2
	I1009 19:27:39.909926  343307 kubeadm.go:601] duration metric: took 36.380579ms to restartPrimaryControlPlane
	I1009 19:27:39.909962  343307 kubeadm.go:402] duration metric: took 118.974675ms to StartCluster
	I1009 19:27:39.909997  343307 settings.go:142] acquiring lock: {Name:mk20228ebaa2294ae35726600a0d8058088b24a4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 19:27:39.910102  343307 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21683-294150/kubeconfig
	I1009 19:27:39.910819  343307 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-294150/kubeconfig: {Name:mke4661344aa77e10bf6690825e8d3a29b29b411 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 19:27:39.911409  343307 config.go:182] Loaded profile config "ha-807463": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1009 19:27:39.911493  343307 start.go:234] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1009 19:27:39.911613  343307 start.go:242] waiting for startup goroutines ...
	I1009 19:27:39.911544  343307 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1009 19:27:39.917562  343307 out.go:179] * Enabled addons: 
	I1009 19:27:39.920371  343307 addons.go:514] duration metric: took 8.815745ms for enable addons: enabled=[]
	I1009 19:27:39.920465  343307 start.go:247] waiting for cluster config update ...
	I1009 19:27:39.920489  343307 start.go:256] writing updated cluster config ...
	I1009 19:27:39.924923  343307 out.go:203] 
	I1009 19:27:39.928045  343307 config.go:182] Loaded profile config "ha-807463": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1009 19:27:39.928167  343307 profile.go:143] Saving config to /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/ha-807463/config.json ...
	I1009 19:27:39.931505  343307 out.go:179] * Starting "ha-807463-m02" control-plane node in "ha-807463" cluster
	I1009 19:27:39.934402  343307 cache.go:123] Beginning downloading kic base image for docker with crio
	I1009 19:27:39.937316  343307 out.go:179] * Pulling base image v0.0.48-1759745255-21703 ...
	I1009 19:27:39.940080  343307 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1009 19:27:39.940107  343307 cache.go:58] Caching tarball of preloaded images
	I1009 19:27:39.940210  343307 preload.go:233] Found /home/jenkins/minikube-integration/21683-294150/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1009 19:27:39.940220  343307 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1009 19:27:39.940348  343307 profile.go:143] Saving config to /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/ha-807463/config.json ...
	I1009 19:27:39.940566  343307 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon
	I1009 19:27:39.975622  343307 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon, skipping pull
	I1009 19:27:39.975643  343307 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 exists in daemon, skipping load
	I1009 19:27:39.975657  343307 cache.go:232] Successfully downloaded all kic artifacts
	I1009 19:27:39.975682  343307 start.go:361] acquireMachinesLock for ha-807463-m02: {Name:mk6ba8ff733306501b688f1b4a216ac9e405e90f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1009 19:27:39.975736  343307 start.go:365] duration metric: took 39.187µs to acquireMachinesLock for "ha-807463-m02"
	I1009 19:27:39.975756  343307 start.go:97] Skipping create...Using existing machine configuration
	I1009 19:27:39.975761  343307 fix.go:55] fixHost starting: m02
	I1009 19:27:39.976050  343307 cli_runner.go:164] Run: docker container inspect ha-807463-m02 --format={{.State.Status}}
	I1009 19:27:40.012164  343307 fix.go:113] recreateIfNeeded on ha-807463-m02: state=Stopped err=<nil>
	W1009 19:27:40.012195  343307 fix.go:139] unexpected machine state, will restart: <nil>
	I1009 19:27:40.015441  343307 out.go:252] * Restarting existing docker container for "ha-807463-m02" ...
	I1009 19:27:40.015539  343307 cli_runner.go:164] Run: docker start ha-807463-m02
	I1009 19:27:40.410002  343307 cli_runner.go:164] Run: docker container inspect ha-807463-m02 --format={{.State.Status}}
	I1009 19:27:40.445455  343307 kic.go:430] container "ha-807463-m02" state is running.
	I1009 19:27:40.445851  343307 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-807463-m02
	I1009 19:27:40.474228  343307 profile.go:143] Saving config to /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/ha-807463/config.json ...
	I1009 19:27:40.474476  343307 machine.go:93] provisionDockerMachine start ...
	I1009 19:27:40.474538  343307 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-807463-m02
	I1009 19:27:40.505891  343307 main.go:141] libmachine: Using SSH client type: native
	I1009 19:27:40.506192  343307 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33186 <nil> <nil>}
	I1009 19:27:40.506201  343307 main.go:141] libmachine: About to run SSH command:
	hostname
	I1009 19:27:40.506929  343307 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:41996->127.0.0.1:33186: read: connection reset by peer
	I1009 19:27:43.729947  343307 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-807463-m02
	
	I1009 19:27:43.729974  343307 ubuntu.go:182] provisioning hostname "ha-807463-m02"
	I1009 19:27:43.730046  343307 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-807463-m02
	I1009 19:27:43.750597  343307 main.go:141] libmachine: Using SSH client type: native
	I1009 19:27:43.750914  343307 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33186 <nil> <nil>}
	I1009 19:27:43.750934  343307 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-807463-m02 && echo "ha-807463-m02" | sudo tee /etc/hostname
	I1009 19:27:44.042915  343307 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-807463-m02
	
	I1009 19:27:44.043000  343307 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-807463-m02
	I1009 19:27:44.070967  343307 main.go:141] libmachine: Using SSH client type: native
	I1009 19:27:44.071275  343307 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33186 <nil> <nil>}
	I1009 19:27:44.071306  343307 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-807463-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-807463-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-807463-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1009 19:27:44.341979  343307 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1009 19:27:44.342008  343307 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21683-294150/.minikube CaCertPath:/home/jenkins/minikube-integration/21683-294150/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21683-294150/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21683-294150/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21683-294150/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21683-294150/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21683-294150/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21683-294150/.minikube}
	I1009 19:27:44.342024  343307 ubuntu.go:190] setting up certificates
	I1009 19:27:44.342039  343307 provision.go:84] configureAuth start
	I1009 19:27:44.342104  343307 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-807463-m02
	I1009 19:27:44.370782  343307 provision.go:143] copyHostCerts
	I1009 19:27:44.370832  343307 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-294150/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21683-294150/.minikube/key.pem
	I1009 19:27:44.370866  343307 exec_runner.go:144] found /home/jenkins/minikube-integration/21683-294150/.minikube/key.pem, removing ...
	I1009 19:27:44.370878  343307 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21683-294150/.minikube/key.pem
	I1009 19:27:44.370961  343307 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-294150/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21683-294150/.minikube/key.pem (1679 bytes)
	I1009 19:27:44.371063  343307 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-294150/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21683-294150/.minikube/ca.pem
	I1009 19:27:44.371087  343307 exec_runner.go:144] found /home/jenkins/minikube-integration/21683-294150/.minikube/ca.pem, removing ...
	I1009 19:27:44.371095  343307 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21683-294150/.minikube/ca.pem
	I1009 19:27:44.371128  343307 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-294150/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21683-294150/.minikube/ca.pem (1078 bytes)
	I1009 19:27:44.371178  343307 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-294150/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21683-294150/.minikube/cert.pem
	I1009 19:27:44.371200  343307 exec_runner.go:144] found /home/jenkins/minikube-integration/21683-294150/.minikube/cert.pem, removing ...
	I1009 19:27:44.371210  343307 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21683-294150/.minikube/cert.pem
	I1009 19:27:44.371237  343307 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-294150/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21683-294150/.minikube/cert.pem (1123 bytes)
	I1009 19:27:44.371335  343307 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21683-294150/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21683-294150/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21683-294150/.minikube/certs/ca-key.pem org=jenkins.ha-807463-m02 san=[127.0.0.1 192.168.49.3 ha-807463-m02 localhost minikube]
	I1009 19:27:45.671497  343307 provision.go:177] copyRemoteCerts
	I1009 19:27:45.671655  343307 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1009 19:27:45.671727  343307 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-807463-m02
	I1009 19:27:45.689990  343307 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33186 SSHKeyPath:/home/jenkins/minikube-integration/21683-294150/.minikube/machines/ha-807463-m02/id_rsa Username:docker}
	I1009 19:27:45.879571  343307 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-294150/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1009 19:27:45.879633  343307 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-294150/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1009 19:27:45.934252  343307 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-294150/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1009 19:27:45.934317  343307 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-294150/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1009 19:27:46.015412  343307 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-294150/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1009 19:27:46.015492  343307 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-294150/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1009 19:27:46.095867  343307 provision.go:87] duration metric: took 1.753810196s to configureAuth
	I1009 19:27:46.095898  343307 ubuntu.go:206] setting minikube options for container-runtime
	I1009 19:27:46.096158  343307 config.go:182] Loaded profile config "ha-807463": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1009 19:27:46.096279  343307 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-807463-m02
	I1009 19:27:46.134871  343307 main.go:141] libmachine: Using SSH client type: native
	I1009 19:27:46.135193  343307 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33186 <nil> <nil>}
	I1009 19:27:46.135215  343307 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1009 19:27:47.743001  343307 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1009 19:27:47.743025  343307 machine.go:96] duration metric: took 7.268539709s to provisionDockerMachine
	I1009 19:27:47.743037  343307 start.go:294] postStartSetup for "ha-807463-m02" (driver="docker")
	I1009 19:27:47.743048  343307 start.go:323] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1009 19:27:47.743114  343307 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1009 19:27:47.743178  343307 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-807463-m02
	I1009 19:27:47.763602  343307 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33186 SSHKeyPath:/home/jenkins/minikube-integration/21683-294150/.minikube/machines/ha-807463-m02/id_rsa Username:docker}
	I1009 19:27:47.878489  343307 ssh_runner.go:195] Run: cat /etc/os-release
	I1009 19:27:47.882311  343307 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1009 19:27:47.882390  343307 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1009 19:27:47.882425  343307 filesync.go:126] Scanning /home/jenkins/minikube-integration/21683-294150/.minikube/addons for local assets ...
	I1009 19:27:47.882513  343307 filesync.go:126] Scanning /home/jenkins/minikube-integration/21683-294150/.minikube/files for local assets ...
	I1009 19:27:47.882649  343307 filesync.go:149] local asset: /home/jenkins/minikube-integration/21683-294150/.minikube/files/etc/ssl/certs/2960022.pem -> 2960022.pem in /etc/ssl/certs
	I1009 19:27:47.882678  343307 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-294150/.minikube/files/etc/ssl/certs/2960022.pem -> /etc/ssl/certs/2960022.pem
	I1009 19:27:47.882829  343307 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1009 19:27:47.895445  343307 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-294150/.minikube/files/etc/ssl/certs/2960022.pem --> /etc/ssl/certs/2960022.pem (1708 bytes)
	I1009 19:27:47.923753  343307 start.go:297] duration metric: took 180.689414ms for postStartSetup
	I1009 19:27:47.923906  343307 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1009 19:27:47.923987  343307 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-807463-m02
	I1009 19:27:47.943574  343307 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33186 SSHKeyPath:/home/jenkins/minikube-integration/21683-294150/.minikube/machines/ha-807463-m02/id_rsa Username:docker}
	I1009 19:27:48.072414  343307 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1009 19:27:48.090538  343307 fix.go:57] duration metric: took 8.114767256s for fixHost
	I1009 19:27:48.090623  343307 start.go:84] releasing machines lock for "ha-807463-m02", held for 8.114877188s
	I1009 19:27:48.090728  343307 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-807463-m02
	I1009 19:27:48.124084  343307 out.go:179] * Found network options:
	I1009 19:27:48.127431  343307 out.go:179]   - NO_PROXY=192.168.49.2
	W1009 19:27:48.131026  343307 proxy.go:120] fail to check proxy env: Error ip not in block
	W1009 19:27:48.131071  343307 proxy.go:120] fail to check proxy env: Error ip not in block
	I1009 19:27:48.131145  343307 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1009 19:27:48.131185  343307 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-807463-m02
	I1009 19:27:48.131442  343307 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1009 19:27:48.131511  343307 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-807463-m02
	I1009 19:27:48.169238  343307 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33186 SSHKeyPath:/home/jenkins/minikube-integration/21683-294150/.minikube/machines/ha-807463-m02/id_rsa Username:docker}
	I1009 19:27:48.169825  343307 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33186 SSHKeyPath:/home/jenkins/minikube-integration/21683-294150/.minikube/machines/ha-807463-m02/id_rsa Username:docker}
	I1009 19:27:48.682814  343307 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1009 19:27:48.688162  343307 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1009 19:27:48.688239  343307 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1009 19:27:48.699171  343307 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1009 19:27:48.699193  343307 start.go:496] detecting cgroup driver to use...
	I1009 19:27:48.699225  343307 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1009 19:27:48.699282  343307 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1009 19:27:48.728026  343307 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1009 19:27:48.752647  343307 docker.go:218] disabling cri-docker service (if available) ...
	I1009 19:27:48.752765  343307 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1009 19:27:48.774861  343307 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1009 19:27:48.799117  343307 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1009 19:27:49.042961  343307 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1009 19:27:49.283614  343307 docker.go:234] disabling docker service ...
	I1009 19:27:49.283734  343307 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1009 19:27:49.307987  343307 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1009 19:27:49.328204  343307 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1009 19:27:49.580623  343307 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1009 19:27:49.895453  343307 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1009 19:27:49.919339  343307 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1009 19:27:49.947539  343307 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1009 19:27:49.947656  343307 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:27:49.962511  343307 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1009 19:27:49.962650  343307 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:27:49.979924  343307 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:27:49.995805  343307 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:27:50.007931  343307 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1009 19:27:50.028218  343307 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:27:50.068031  343307 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:27:50.096196  343307 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:27:50.122544  343307 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1009 19:27:50.151110  343307 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1009 19:27:50.173303  343307 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 19:27:50.489690  343307 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1009 19:27:50.773593  343307 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1009 19:27:50.773686  343307 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1009 19:27:50.777653  343307 start.go:564] Will wait 60s for crictl version
	I1009 19:27:50.777737  343307 ssh_runner.go:195] Run: which crictl
	I1009 19:27:50.781240  343307 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1009 19:27:50.810791  343307 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1009 19:27:50.810938  343307 ssh_runner.go:195] Run: crio --version
	I1009 19:27:50.840800  343307 ssh_runner.go:195] Run: crio --version
	I1009 19:27:50.876670  343307 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1009 19:27:50.879670  343307 out.go:179]   - env NO_PROXY=192.168.49.2
	I1009 19:27:50.882673  343307 cli_runner.go:164] Run: docker network inspect ha-807463 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1009 19:27:50.898864  343307 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1009 19:27:50.902801  343307 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1009 19:27:50.912892  343307 mustload.go:65] Loading cluster: ha-807463
	I1009 19:27:50.913185  343307 config.go:182] Loaded profile config "ha-807463": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1009 19:27:50.913459  343307 cli_runner.go:164] Run: docker container inspect ha-807463 --format={{.State.Status}}
	I1009 19:27:50.931384  343307 host.go:66] Checking if "ha-807463" exists ...
	I1009 19:27:50.931675  343307 certs.go:69] Setting up /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/ha-807463 for IP: 192.168.49.3
	I1009 19:27:50.931689  343307 certs.go:195] generating shared ca certs ...
	I1009 19:27:50.931705  343307 certs.go:227] acquiring lock for ca certs: {Name:mk21fccf7de12baa51e226eec44546b42b579a70 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 19:27:50.931837  343307 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21683-294150/.minikube/ca.key
	I1009 19:27:50.931898  343307 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21683-294150/.minikube/proxy-client-ca.key
	I1009 19:27:50.931911  343307 certs.go:257] generating profile certs ...
	I1009 19:27:50.931992  343307 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/ha-807463/client.key
	I1009 19:27:50.932059  343307 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/ha-807463/apiserver.key.0cec3fb8
	I1009 19:27:50.932139  343307 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/ha-807463/proxy-client.key
	I1009 19:27:50.932153  343307 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-294150/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1009 19:27:50.932166  343307 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-294150/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1009 19:27:50.932181  343307 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-294150/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1009 19:27:50.932192  343307 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-294150/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1009 19:27:50.932209  343307 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/ha-807463/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1009 19:27:50.932226  343307 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/ha-807463/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1009 19:27:50.932242  343307 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/ha-807463/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1009 19:27:50.932253  343307 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/ha-807463/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1009 19:27:50.932306  343307 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-294150/.minikube/certs/296002.pem (1338 bytes)
	W1009 19:27:50.932342  343307 certs.go:480] ignoring /home/jenkins/minikube-integration/21683-294150/.minikube/certs/296002_empty.pem, impossibly tiny 0 bytes
	I1009 19:27:50.932355  343307 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-294150/.minikube/certs/ca-key.pem (1679 bytes)
	I1009 19:27:50.932378  343307 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-294150/.minikube/certs/ca.pem (1078 bytes)
	I1009 19:27:50.932408  343307 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-294150/.minikube/certs/cert.pem (1123 bytes)
	I1009 19:27:50.932435  343307 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-294150/.minikube/certs/key.pem (1679 bytes)
	I1009 19:27:50.932481  343307 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-294150/.minikube/files/etc/ssl/certs/2960022.pem (1708 bytes)
	I1009 19:27:50.932513  343307 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-294150/.minikube/certs/296002.pem -> /usr/share/ca-certificates/296002.pem
	I1009 19:27:50.932528  343307 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-294150/.minikube/files/etc/ssl/certs/2960022.pem -> /usr/share/ca-certificates/2960022.pem
	I1009 19:27:50.932539  343307 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-294150/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1009 19:27:50.932602  343307 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-807463
	I1009 19:27:50.949747  343307 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33181 SSHKeyPath:/home/jenkins/minikube-integration/21683-294150/.minikube/machines/ha-807463/id_rsa Username:docker}
	I1009 19:27:51.053408  343307 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I1009 19:27:51.057364  343307 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I1009 19:27:51.066242  343307 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I1009 19:27:51.070160  343307 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I1009 19:27:51.082531  343307 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I1009 19:27:51.086523  343307 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I1009 19:27:51.095670  343307 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I1009 19:27:51.099538  343307 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I1009 19:27:51.108444  343307 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I1009 19:27:51.112383  343307 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I1009 19:27:51.121230  343307 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I1009 19:27:51.126634  343307 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I1009 19:27:51.135934  343307 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-294150/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1009 19:27:51.157827  343307 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-294150/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1009 19:27:51.177909  343307 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-294150/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1009 19:27:51.208380  343307 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-294150/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1009 19:27:51.233729  343307 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/ha-807463/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1009 19:27:51.254881  343307 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/ha-807463/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1009 19:27:51.273448  343307 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/ha-807463/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1009 19:27:51.293146  343307 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/ha-807463/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1009 19:27:51.312924  343307 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-294150/.minikube/certs/296002.pem --> /usr/share/ca-certificates/296002.pem (1338 bytes)
	I1009 19:27:51.335482  343307 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-294150/.minikube/files/etc/ssl/certs/2960022.pem --> /usr/share/ca-certificates/2960022.pem (1708 bytes)
	I1009 19:27:51.355302  343307 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-294150/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1009 19:27:51.375754  343307 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I1009 19:27:51.391115  343307 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I1009 19:27:51.404527  343307 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I1009 19:27:51.418174  343307 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I1009 19:27:51.431794  343307 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I1009 19:27:51.445219  343307 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I1009 19:27:51.460138  343307 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I1009 19:27:51.473336  343307 ssh_runner.go:195] Run: openssl version
	I1009 19:27:51.480063  343307 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/296002.pem && ln -fs /usr/share/ca-certificates/296002.pem /etc/ssl/certs/296002.pem"
	I1009 19:27:51.488916  343307 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/296002.pem
	I1009 19:27:51.493541  343307 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  9 19:08 /usr/share/ca-certificates/296002.pem
	I1009 19:27:51.493662  343307 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/296002.pem
	I1009 19:27:51.535043  343307 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/296002.pem /etc/ssl/certs/51391683.0"
	I1009 19:27:51.543247  343307 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2960022.pem && ln -fs /usr/share/ca-certificates/2960022.pem /etc/ssl/certs/2960022.pem"
	I1009 19:27:51.552252  343307 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2960022.pem
	I1009 19:27:51.556439  343307 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  9 19:08 /usr/share/ca-certificates/2960022.pem
	I1009 19:27:51.556553  343307 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2960022.pem
	I1009 19:27:51.598587  343307 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2960022.pem /etc/ssl/certs/3ec20f2e.0"
	I1009 19:27:51.607271  343307 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1009 19:27:51.616125  343307 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1009 19:27:51.620083  343307 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  9 19:01 /usr/share/ca-certificates/minikubeCA.pem
	I1009 19:27:51.620175  343307 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1009 19:27:51.664070  343307 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1009 19:27:51.672785  343307 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1009 19:27:51.676884  343307 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1009 19:27:51.718930  343307 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1009 19:27:51.761150  343307 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1009 19:27:51.802284  343307 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1009 19:27:51.843422  343307 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1009 19:27:51.890388  343307 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1009 19:27:51.931465  343307 kubeadm.go:934] updating node {m02 192.168.49.3 8443 v1.34.1 crio true true} ...
	I1009 19:27:51.931643  343307 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-807463-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.3
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:ha-807463 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1009 19:27:51.931677  343307 kube-vip.go:115] generating kube-vip config ...
	I1009 19:27:51.931730  343307 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I1009 19:27:51.945085  343307 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I1009 19:27:51.945174  343307 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I1009 19:27:51.945236  343307 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1009 19:27:51.955208  343307 binaries.go:44] Found k8s binaries, skipping transfer
	I1009 19:27:51.955321  343307 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I1009 19:27:51.963468  343307 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1009 19:27:51.977048  343307 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1009 19:27:51.990708  343307 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I1009 19:27:52.008521  343307 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1009 19:27:52.012741  343307 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1009 19:27:52.024091  343307 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 19:27:52.162593  343307 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1009 19:27:52.176738  343307 start.go:236] Will wait 6m0s for node &{Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1009 19:27:52.177297  343307 config.go:182] Loaded profile config "ha-807463": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1009 19:27:52.180462  343307 out.go:179] * Verifying Kubernetes components...
	I1009 19:27:52.183354  343307 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 19:27:52.328633  343307 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1009 19:27:52.343053  343307 kapi.go:59] client config for ha-807463: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21683-294150/.minikube/profiles/ha-807463/client.crt", KeyFile:"/home/jenkins/minikube-integration/21683-294150/.minikube/profiles/ha-807463/client.key", CAFile:"/home/jenkins/minikube-integration/21683-294150/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2120180), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1009 19:27:52.343132  343307 kubeadm.go:491] Overriding stale ClientConfig host https://192.168.49.254:8443 with https://192.168.49.2:8443
	I1009 19:27:52.343378  343307 node_ready.go:35] waiting up to 6m0s for node "ha-807463-m02" to be "Ready" ...
	I1009 19:28:12.417047  343307 node_ready.go:49] node "ha-807463-m02" is "Ready"
	I1009 19:28:12.417075  343307 node_ready.go:38] duration metric: took 20.07367073s for node "ha-807463-m02" to be "Ready" ...
	I1009 19:28:12.417087  343307 api_server.go:52] waiting for apiserver process to appear ...
	I1009 19:28:12.417171  343307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 19:28:12.917913  343307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 19:28:13.418163  343307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 19:28:13.917283  343307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 19:28:14.417776  343307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 19:28:14.441559  343307 api_server.go:72] duration metric: took 22.264725667s to wait for apiserver process to appear ...
	I1009 19:28:14.441582  343307 api_server.go:88] waiting for apiserver healthz status ...
	I1009 19:28:14.441601  343307 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1009 19:28:14.457402  343307 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1009 19:28:14.458648  343307 api_server.go:141] control plane version: v1.34.1
	I1009 19:28:14.458703  343307 api_server.go:131] duration metric: took 17.113274ms to wait for apiserver health ...
	I1009 19:28:14.458728  343307 system_pods.go:43] waiting for kube-system pods to appear ...
	I1009 19:28:14.470395  343307 system_pods.go:59] 26 kube-system pods found
	I1009 19:28:14.470439  343307 system_pods.go:61] "coredns-66bc5c9577-tswbs" [5837c6fe-278a-4b3a-98d1-79992fe9ea08] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1009 19:28:14.470449  343307 system_pods.go:61] "coredns-66bc5c9577-vkzgf" [80c50dd0-6a2c-4662-80d3-72f45754c3df] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1009 19:28:14.470454  343307 system_pods.go:61] "etcd-ha-807463" [84964141-cf31-4652-9a3c-a9265edf4f8d] Running
	I1009 19:28:14.470459  343307 system_pods.go:61] "etcd-ha-807463-m02" [e91cfd04-5988-45ce-9dae-b204db6efe4e] Running
	I1009 19:28:14.470464  343307 system_pods.go:61] "etcd-ha-807463-m03" [26cd4bca-fd69-452f-b5a2-b9bbc5966ded] Running
	I1009 19:28:14.470473  343307 system_pods.go:61] "kindnet-bc8tf" [f003f127-5e25-434a-837b-d021fb0e3fa7] Running
	I1009 19:28:14.470477  343307 system_pods.go:61] "kindnet-dvwc7" [2a7512ff-e63c-4aa0-8b4e-fb241415067f] Running
	I1009 19:28:14.470483  343307 system_pods.go:61] "kindnet-gvpmq" [223d0c34-5384-4cd5-a0d2-842a422629ab] Running
	I1009 19:28:14.470488  343307 system_pods.go:61] "kindnet-rc46j" [22f58fe4-1d11-4259-b9f9-e8740b8b2257] Running
	I1009 19:28:14.470501  343307 system_pods.go:61] "kube-apiserver-ha-807463" [f6f353e4-8237-46db-a4a8-cd536448a79b] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1009 19:28:14.470507  343307 system_pods.go:61] "kube-apiserver-ha-807463-m02" [3d8c0d4b-2cfb-4de6-8d9f-95e25e6f2a4e] Running
	I1009 19:28:14.470517  343307 system_pods.go:61] "kube-apiserver-ha-807463-m03" [a7b828f8-ab95-440a-b42e-e48d83bf3d20] Running
	I1009 19:28:14.470521  343307 system_pods.go:61] "kube-controller-manager-ha-807463" [e409b5f4-e73e-4270-bc1b-44b9a84123c7] Running
	I1009 19:28:14.470527  343307 system_pods.go:61] "kube-controller-manager-ha-807463-m02" [bce8c53d-0ba9-4e5f-93ca-06958824d9ba] Running
	I1009 19:28:14.470538  343307 system_pods.go:61] "kube-controller-manager-ha-807463-m03" [96d81c2f-668e-4729-aa2c-ab008af31ef1] Running
	I1009 19:28:14.470542  343307 system_pods.go:61] "kube-proxy-2lp2p" [cb605c64-8004-4f40-8e70-eb8e3184d3d6] Running
	I1009 19:28:14.470546  343307 system_pods.go:61] "kube-proxy-7lpbk" [d6ba71bf-d06d-4ade-b0e4-85303842110c] Running
	I1009 19:28:14.470550  343307 system_pods.go:61] "kube-proxy-b84dn" [9c10ee5e-8408-4b6f-985a-8d4f44a869cc] Running
	I1009 19:28:14.470555  343307 system_pods.go:61] "kube-proxy-vw7c5" [89df419c-841c-4a9c-af83-50e98327318d] Running
	I1009 19:28:14.470561  343307 system_pods.go:61] "kube-scheduler-ha-807463" [d577e200-00d6-4bac-aa67-0f7ef54c4d1a] Running
	I1009 19:28:14.470568  343307 system_pods.go:61] "kube-scheduler-ha-807463-m02" [848b94f3-79dc-44dc-8416-33c96451e0c0] Running
	I1009 19:28:14.470572  343307 system_pods.go:61] "kube-scheduler-ha-807463-m03" [f7153dac-0ede-40dc-b18c-1c03bebc8414] Running
	I1009 19:28:14.470578  343307 system_pods.go:61] "kube-vip-ha-807463" [f4f09ea9-0059-4cc4-9c0b-0ea2240a1885] Running
	I1009 19:28:14.470583  343307 system_pods.go:61] "kube-vip-ha-807463-m02" [98f28358-d9e9-4f8a-b407-b14baa34ea75] Running
	I1009 19:28:14.470589  343307 system_pods.go:61] "kube-vip-ha-807463-m03" [c150d4cd-1c28-4677-9a55-6e2d119daa81] Running
	I1009 19:28:14.470594  343307 system_pods.go:61] "storage-provisioner" [b9e8a81e-2bee-4542-b231-7490dfbf6065] Running
	I1009 19:28:14.470599  343307 system_pods.go:74] duration metric: took 11.85336ms to wait for pod list to return data ...
	I1009 19:28:14.470612  343307 default_sa.go:34] waiting for default service account to be created ...
	I1009 19:28:14.482492  343307 default_sa.go:45] found service account: "default"
	I1009 19:28:14.482522  343307 default_sa.go:55] duration metric: took 11.902296ms for default service account to be created ...
	I1009 19:28:14.482532  343307 system_pods.go:116] waiting for k8s-apps to be running ...
	I1009 19:28:14.496415  343307 system_pods.go:86] 26 kube-system pods found
	I1009 19:28:14.496458  343307 system_pods.go:89] "coredns-66bc5c9577-tswbs" [5837c6fe-278a-4b3a-98d1-79992fe9ea08] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1009 19:28:14.496468  343307 system_pods.go:89] "coredns-66bc5c9577-vkzgf" [80c50dd0-6a2c-4662-80d3-72f45754c3df] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1009 19:28:14.496475  343307 system_pods.go:89] "etcd-ha-807463" [84964141-cf31-4652-9a3c-a9265edf4f8d] Running
	I1009 19:28:14.496480  343307 system_pods.go:89] "etcd-ha-807463-m02" [e91cfd04-5988-45ce-9dae-b204db6efe4e] Running
	I1009 19:28:14.496484  343307 system_pods.go:89] "etcd-ha-807463-m03" [26cd4bca-fd69-452f-b5a2-b9bbc5966ded] Running
	I1009 19:28:14.496488  343307 system_pods.go:89] "kindnet-bc8tf" [f003f127-5e25-434a-837b-d021fb0e3fa7] Running
	I1009 19:28:14.496493  343307 system_pods.go:89] "kindnet-dvwc7" [2a7512ff-e63c-4aa0-8b4e-fb241415067f] Running
	I1009 19:28:14.496502  343307 system_pods.go:89] "kindnet-gvpmq" [223d0c34-5384-4cd5-a0d2-842a422629ab] Running
	I1009 19:28:14.496509  343307 system_pods.go:89] "kindnet-rc46j" [22f58fe4-1d11-4259-b9f9-e8740b8b2257] Running
	I1009 19:28:14.496517  343307 system_pods.go:89] "kube-apiserver-ha-807463" [f6f353e4-8237-46db-a4a8-cd536448a79b] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1009 19:28:14.496523  343307 system_pods.go:89] "kube-apiserver-ha-807463-m02" [3d8c0d4b-2cfb-4de6-8d9f-95e25e6f2a4e] Running
	I1009 19:28:14.496534  343307 system_pods.go:89] "kube-apiserver-ha-807463-m03" [a7b828f8-ab95-440a-b42e-e48d83bf3d20] Running
	I1009 19:28:14.496539  343307 system_pods.go:89] "kube-controller-manager-ha-807463" [e409b5f4-e73e-4270-bc1b-44b9a84123c7] Running
	I1009 19:28:14.496544  343307 system_pods.go:89] "kube-controller-manager-ha-807463-m02" [bce8c53d-0ba9-4e5f-93ca-06958824d9ba] Running
	I1009 19:28:14.496553  343307 system_pods.go:89] "kube-controller-manager-ha-807463-m03" [96d81c2f-668e-4729-aa2c-ab008af31ef1] Running
	I1009 19:28:14.496557  343307 system_pods.go:89] "kube-proxy-2lp2p" [cb605c64-8004-4f40-8e70-eb8e3184d3d6] Running
	I1009 19:28:14.496561  343307 system_pods.go:89] "kube-proxy-7lpbk" [d6ba71bf-d06d-4ade-b0e4-85303842110c] Running
	I1009 19:28:14.496566  343307 system_pods.go:89] "kube-proxy-b84dn" [9c10ee5e-8408-4b6f-985a-8d4f44a869cc] Running
	I1009 19:28:14.496575  343307 system_pods.go:89] "kube-proxy-vw7c5" [89df419c-841c-4a9c-af83-50e98327318d] Running
	I1009 19:28:14.496579  343307 system_pods.go:89] "kube-scheduler-ha-807463" [d577e200-00d6-4bac-aa67-0f7ef54c4d1a] Running
	I1009 19:28:14.496583  343307 system_pods.go:89] "kube-scheduler-ha-807463-m02" [848b94f3-79dc-44dc-8416-33c96451e0c0] Running
	I1009 19:28:14.496587  343307 system_pods.go:89] "kube-scheduler-ha-807463-m03" [f7153dac-0ede-40dc-b18c-1c03bebc8414] Running
	I1009 19:28:14.496591  343307 system_pods.go:89] "kube-vip-ha-807463" [f4f09ea9-0059-4cc4-9c0b-0ea2240a1885] Running
	I1009 19:28:14.496597  343307 system_pods.go:89] "kube-vip-ha-807463-m02" [98f28358-d9e9-4f8a-b407-b14baa34ea75] Running
	I1009 19:28:14.496601  343307 system_pods.go:89] "kube-vip-ha-807463-m03" [c150d4cd-1c28-4677-9a55-6e2d119daa81] Running
	I1009 19:28:14.496609  343307 system_pods.go:89] "storage-provisioner" [b9e8a81e-2bee-4542-b231-7490dfbf6065] Running
	I1009 19:28:14.496616  343307 system_pods.go:126] duration metric: took 14.078508ms to wait for k8s-apps to be running ...
	I1009 19:28:14.496627  343307 system_svc.go:44] waiting for kubelet service to be running ....
	I1009 19:28:14.496696  343307 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1009 19:28:14.527254  343307 system_svc.go:56] duration metric: took 30.616666ms WaitForService to wait for kubelet
	I1009 19:28:14.527281  343307 kubeadm.go:586] duration metric: took 22.350452667s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1009 19:28:14.527300  343307 node_conditions.go:102] verifying NodePressure condition ...
	I1009 19:28:14.536047  343307 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1009 19:28:14.536130  343307 node_conditions.go:123] node cpu capacity is 2
	I1009 19:28:14.536159  343307 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1009 19:28:14.536184  343307 node_conditions.go:123] node cpu capacity is 2
	I1009 19:28:14.536225  343307 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1009 19:28:14.536247  343307 node_conditions.go:123] node cpu capacity is 2
	I1009 19:28:14.536284  343307 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1009 19:28:14.536308  343307 node_conditions.go:123] node cpu capacity is 2
	I1009 19:28:14.536330  343307 node_conditions.go:105] duration metric: took 9.020752ms to run NodePressure ...
	I1009 19:28:14.536373  343307 start.go:242] waiting for startup goroutines ...
	I1009 19:28:14.536414  343307 start.go:256] writing updated cluster config ...
	I1009 19:28:14.540247  343307 out.go:203] 
	I1009 19:28:14.543487  343307 config.go:182] Loaded profile config "ha-807463": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1009 19:28:14.543686  343307 profile.go:143] Saving config to /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/ha-807463/config.json ...
	I1009 19:28:14.547047  343307 out.go:179] * Starting "ha-807463-m03" control-plane node in "ha-807463" cluster
	I1009 19:28:14.550723  343307 cache.go:123] Beginning downloading kic base image for docker with crio
	I1009 19:28:14.553769  343307 out.go:179] * Pulling base image v0.0.48-1759745255-21703 ...
	I1009 19:28:14.556767  343307 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1009 19:28:14.556832  343307 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon
	I1009 19:28:14.557073  343307 cache.go:58] Caching tarball of preloaded images
	I1009 19:28:14.557216  343307 preload.go:233] Found /home/jenkins/minikube-integration/21683-294150/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1009 19:28:14.557276  343307 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1009 19:28:14.557431  343307 profile.go:143] Saving config to /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/ha-807463/config.json ...
	I1009 19:28:14.597092  343307 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon, skipping pull
	I1009 19:28:14.597123  343307 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 exists in daemon, skipping load
	I1009 19:28:14.597144  343307 cache.go:232] Successfully downloaded all kic artifacts
	I1009 19:28:14.597168  343307 start.go:361] acquireMachinesLock for ha-807463-m03: {Name:mk0e43107ec0c9bc8c06da921397f514d91f61d8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1009 19:28:14.597229  343307 start.go:365] duration metric: took 46.457µs to acquireMachinesLock for "ha-807463-m03"
	I1009 19:28:14.597250  343307 start.go:97] Skipping create...Using existing machine configuration
	I1009 19:28:14.597255  343307 fix.go:55] fixHost starting: m03
	I1009 19:28:14.597512  343307 cli_runner.go:164] Run: docker container inspect ha-807463-m03 --format={{.State.Status}}
	I1009 19:28:14.632017  343307 fix.go:113] recreateIfNeeded on ha-807463-m03: state=Stopped err=<nil>
	W1009 19:28:14.632042  343307 fix.go:139] unexpected machine state, will restart: <nil>
	I1009 19:28:14.635426  343307 out.go:252] * Restarting existing docker container for "ha-807463-m03" ...
	I1009 19:28:14.635514  343307 cli_runner.go:164] Run: docker start ha-807463-m03
	I1009 19:28:15.014352  343307 cli_runner.go:164] Run: docker container inspect ha-807463-m03 --format={{.State.Status}}
	I1009 19:28:15.044342  343307 kic.go:430] container "ha-807463-m03" state is running.
	I1009 19:28:15.044802  343307 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-807463-m03
	I1009 19:28:15.084035  343307 profile.go:143] Saving config to /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/ha-807463/config.json ...
	I1009 19:28:15.084294  343307 machine.go:93] provisionDockerMachine start ...
	I1009 19:28:15.084356  343307 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-807463-m03
	I1009 19:28:15.113499  343307 main.go:141] libmachine: Using SSH client type: native
	I1009 19:28:15.113819  343307 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33191 <nil> <nil>}
	I1009 19:28:15.113829  343307 main.go:141] libmachine: About to run SSH command:
	hostname
	I1009 19:28:15.114606  343307 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1009 19:28:18.387326  343307 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-807463-m03
	
	I1009 19:28:18.387353  343307 ubuntu.go:182] provisioning hostname "ha-807463-m03"
	I1009 19:28:18.387421  343307 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-807463-m03
	I1009 19:28:18.414941  343307 main.go:141] libmachine: Using SSH client type: native
	I1009 19:28:18.415269  343307 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33191 <nil> <nil>}
	I1009 19:28:18.415288  343307 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-807463-m03 && echo "ha-807463-m03" | sudo tee /etc/hostname
	I1009 19:28:18.857505  343307 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-807463-m03
	
	I1009 19:28:18.857586  343307 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-807463-m03
	I1009 19:28:18.886274  343307 main.go:141] libmachine: Using SSH client type: native
	I1009 19:28:18.886587  343307 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33191 <nil> <nil>}
	I1009 19:28:18.886603  343307 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-807463-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-807463-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-807463-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1009 19:28:19.124493  343307 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1009 19:28:19.124522  343307 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21683-294150/.minikube CaCertPath:/home/jenkins/minikube-integration/21683-294150/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21683-294150/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21683-294150/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21683-294150/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21683-294150/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21683-294150/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21683-294150/.minikube}
	I1009 19:28:19.124543  343307 ubuntu.go:190] setting up certificates
	I1009 19:28:19.124552  343307 provision.go:84] configureAuth start
	I1009 19:28:19.124639  343307 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-807463-m03
	I1009 19:28:19.150744  343307 provision.go:143] copyHostCerts
	I1009 19:28:19.150791  343307 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-294150/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21683-294150/.minikube/ca.pem
	I1009 19:28:19.150823  343307 exec_runner.go:144] found /home/jenkins/minikube-integration/21683-294150/.minikube/ca.pem, removing ...
	I1009 19:28:19.150839  343307 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21683-294150/.minikube/ca.pem
	I1009 19:28:19.150921  343307 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-294150/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21683-294150/.minikube/ca.pem (1078 bytes)
	I1009 19:28:19.151006  343307 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-294150/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21683-294150/.minikube/cert.pem
	I1009 19:28:19.151029  343307 exec_runner.go:144] found /home/jenkins/minikube-integration/21683-294150/.minikube/cert.pem, removing ...
	I1009 19:28:19.151037  343307 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21683-294150/.minikube/cert.pem
	I1009 19:28:19.151079  343307 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-294150/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21683-294150/.minikube/cert.pem (1123 bytes)
	I1009 19:28:19.151132  343307 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-294150/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21683-294150/.minikube/key.pem
	I1009 19:28:19.151154  343307 exec_runner.go:144] found /home/jenkins/minikube-integration/21683-294150/.minikube/key.pem, removing ...
	I1009 19:28:19.151159  343307 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21683-294150/.minikube/key.pem
	I1009 19:28:19.151184  343307 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-294150/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21683-294150/.minikube/key.pem (1679 bytes)
	I1009 19:28:19.151236  343307 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21683-294150/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21683-294150/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21683-294150/.minikube/certs/ca-key.pem org=jenkins.ha-807463-m03 san=[127.0.0.1 192.168.49.4 ha-807463-m03 localhost minikube]
	I1009 19:28:20.594319  343307 provision.go:177] copyRemoteCerts
	I1009 19:28:20.594391  343307 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1009 19:28:20.594445  343307 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-807463-m03
	I1009 19:28:20.617127  343307 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33191 SSHKeyPath:/home/jenkins/minikube-integration/21683-294150/.minikube/machines/ha-807463-m03/id_rsa Username:docker}
	I1009 19:28:20.793603  343307 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-294150/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1009 19:28:20.793667  343307 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-294150/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1009 19:28:20.838358  343307 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-294150/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1009 19:28:20.838425  343307 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-294150/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1009 19:28:20.897009  343307 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-294150/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1009 19:28:20.897076  343307 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-294150/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1009 19:28:20.947823  343307 provision.go:87] duration metric: took 1.823247487s to configureAuth
	I1009 19:28:20.947854  343307 ubuntu.go:206] setting minikube options for container-runtime
	I1009 19:28:20.948102  343307 config.go:182] Loaded profile config "ha-807463": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1009 19:28:20.948220  343307 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-807463-m03
	I1009 19:28:20.980853  343307 main.go:141] libmachine: Using SSH client type: native
	I1009 19:28:20.981192  343307 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33191 <nil> <nil>}
	I1009 19:28:20.981215  343307 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1009 19:28:21.547892  343307 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1009 19:28:21.547940  343307 machine.go:96] duration metric: took 6.463636002s to provisionDockerMachine
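	The SSH command just above writes a sysconfig drop-in so CRI-O treats the in-cluster service CIDR as an insecure registry, then restarts the service. A small sketch of composing that command string in Go; the CIDR value is taken from this log, and the SSH execution itself is left out:

    package main

    import "fmt"

    func main() {
        // Assumed value; minikube derives the service CIDR from the cluster config.
        serviceCIDR := "10.96.0.0/12"

        // Compose the same shell pipeline the log shows: write the sysconfig
        // drop-in and restart CRI-O so the --insecure-registry flag takes effect.
        opts := fmt.Sprintf("CRIO_MINIKUBE_OPTIONS='--insecure-registry %s '\n", serviceCIDR)
        cmd := fmt.Sprintf(
            `sudo mkdir -p /etc/sysconfig && printf %%s "%s" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio`,
            opts,
        )
        fmt.Println(cmd) // in minikube this string is handed to the SSH runner
    }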
	I1009 19:28:21.547953  343307 start.go:294] postStartSetup for "ha-807463-m03" (driver="docker")
	I1009 19:28:21.547963  343307 start.go:323] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1009 19:28:21.548058  343307 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1009 19:28:21.548103  343307 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-807463-m03
	I1009 19:28:21.574619  343307 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33191 SSHKeyPath:/home/jenkins/minikube-integration/21683-294150/.minikube/machines/ha-807463-m03/id_rsa Username:docker}
	I1009 19:28:21.688699  343307 ssh_runner.go:195] Run: cat /etc/os-release
	I1009 19:28:21.693344  343307 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1009 19:28:21.693371  343307 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1009 19:28:21.693382  343307 filesync.go:126] Scanning /home/jenkins/minikube-integration/21683-294150/.minikube/addons for local assets ...
	I1009 19:28:21.693440  343307 filesync.go:126] Scanning /home/jenkins/minikube-integration/21683-294150/.minikube/files for local assets ...
	I1009 19:28:21.693513  343307 filesync.go:149] local asset: /home/jenkins/minikube-integration/21683-294150/.minikube/files/etc/ssl/certs/2960022.pem -> 2960022.pem in /etc/ssl/certs
	I1009 19:28:21.693520  343307 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-294150/.minikube/files/etc/ssl/certs/2960022.pem -> /etc/ssl/certs/2960022.pem
	I1009 19:28:21.693621  343307 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1009 19:28:21.703022  343307 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-294150/.minikube/files/etc/ssl/certs/2960022.pem --> /etc/ssl/certs/2960022.pem (1708 bytes)
	I1009 19:28:21.726060  343307 start.go:297] duration metric: took 178.090392ms for postStartSetup
	I1009 19:28:21.726183  343307 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1009 19:28:21.726252  343307 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-807463-m03
	I1009 19:28:21.754232  343307 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33191 SSHKeyPath:/home/jenkins/minikube-integration/21683-294150/.minikube/machines/ha-807463-m03/id_rsa Username:docker}
	I1009 19:28:21.887060  343307 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1009 19:28:21.902692  343307 fix.go:57] duration metric: took 7.305428838s for fixHost
	I1009 19:28:21.902721  343307 start.go:84] releasing machines lock for "ha-807463-m03", held for 7.305481549s
	I1009 19:28:21.902791  343307 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-807463-m03
	I1009 19:28:21.935444  343307 out.go:179] * Found network options:
	I1009 19:28:21.938464  343307 out.go:179]   - NO_PROXY=192.168.49.2,192.168.49.3
	W1009 19:28:21.941326  343307 proxy.go:120] fail to check proxy env: Error ip not in block
	W1009 19:28:21.941366  343307 proxy.go:120] fail to check proxy env: Error ip not in block
	W1009 19:28:21.941390  343307 proxy.go:120] fail to check proxy env: Error ip not in block
	W1009 19:28:21.941399  343307 proxy.go:120] fail to check proxy env: Error ip not in block
	I1009 19:28:21.941489  343307 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1009 19:28:21.941533  343307 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-807463-m03
	I1009 19:28:21.941553  343307 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1009 19:28:21.941612  343307 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-807463-m03
	I1009 19:28:21.971654  343307 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33191 SSHKeyPath:/home/jenkins/minikube-integration/21683-294150/.minikube/machines/ha-807463-m03/id_rsa Username:docker}
	I1009 19:28:21.991268  343307 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33191 SSHKeyPath:/home/jenkins/minikube-integration/21683-294150/.minikube/machines/ha-807463-m03/id_rsa Username:docker}
	I1009 19:28:22.521550  343307 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1009 19:28:22.531247  343307 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1009 19:28:22.531361  343307 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1009 19:28:22.554768  343307 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1009 19:28:22.554843  343307 start.go:496] detecting cgroup driver to use...
	I1009 19:28:22.554892  343307 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1009 19:28:22.554962  343307 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1009 19:28:22.583220  343307 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1009 19:28:22.599310  343307 docker.go:218] disabling cri-docker service (if available) ...
	I1009 19:28:22.599403  343307 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1009 19:28:22.632291  343307 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1009 19:28:22.653641  343307 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1009 19:28:23.037548  343307 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1009 19:28:23.288869  343307 docker.go:234] disabling docker service ...
	I1009 19:28:23.288983  343307 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1009 19:28:23.316355  343307 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1009 19:28:23.341879  343307 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1009 19:28:23.636459  343307 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1009 19:28:23.958882  343307 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1009 19:28:24.002025  343307 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1009 19:28:24.060081  343307 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1009 19:28:24.060153  343307 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:28:24.094554  343307 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1009 19:28:24.094632  343307 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:28:24.113879  343307 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:28:24.124444  343307 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:28:24.134135  343307 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1009 19:28:24.153071  343307 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:28:24.164683  343307 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:28:24.175420  343307 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:28:24.185724  343307 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1009 19:28:24.196010  343307 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
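	The sed one-liners above rewrite /etc/crio/crio.conf.d/02-crio.conf in place: they pin the pause image, switch cgroup_manager to cgroupfs, and re-add conmon_cgroup = "pod". A sketch of the first few edits as Go regexp replacements over an assumed sample of the file (illustrative only; minikube itself shells out to sed over SSH):

    package main

    import (
        "fmt"
        "regexp"
    )

    func main() {
        // Illustrative stand-in for /etc/crio/crio.conf.d/02-crio.conf.
        conf := `[crio.image]
    pause_image = "registry.k8s.io/pause:3.9"
    [crio.runtime]
    cgroup_manager = "systemd"
    conmon_cgroup = "system.slice"
    `
        // Same substitutions as the sed commands: pin the pause image and
        // switch the cgroup manager to cgroupfs with conmon_cgroup = "pod".
        conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
            ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.10.1"`)
        conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
            ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)
        conf = regexp.MustCompile(`(?m)^\s*conmon_cgroup = .*\n`).ReplaceAllString(conf, "")
        conf = regexp.MustCompile(`(?m)^\s*cgroup_manager = .*$`).
            ReplaceAllString(conf, "$0\n    conmon_cgroup = \"pod\"")

        fmt.Print(conf)
    }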
	I1009 19:28:24.206389  343307 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 19:28:24.403396  343307 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1009 19:29:54.625257  343307 ssh_runner.go:235] Completed: sudo systemctl restart crio: (1m30.2217701s)
	I1009 19:29:54.625289  343307 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1009 19:29:54.625347  343307 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1009 19:29:54.629422  343307 start.go:564] Will wait 60s for crictl version
	I1009 19:29:54.629487  343307 ssh_runner.go:195] Run: which crictl
	I1009 19:29:54.633348  343307 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1009 19:29:54.664178  343307 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1009 19:29:54.664263  343307 ssh_runner.go:195] Run: crio --version
	I1009 19:29:54.695047  343307 ssh_runner.go:195] Run: crio --version
	I1009 19:29:54.726968  343307 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1009 19:29:54.729882  343307 out.go:179]   - env NO_PROXY=192.168.49.2
	I1009 19:29:54.732783  343307 out.go:179]   - env NO_PROXY=192.168.49.2,192.168.49.3
	I1009 19:29:54.735745  343307 cli_runner.go:164] Run: docker network inspect ha-807463 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1009 19:29:54.754488  343307 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1009 19:29:54.758549  343307 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
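	The hosts update above is idempotent: it drops any existing host.minikube.internal line and appends a fresh one. A sketch of the same upsert in Go, operating on example file contents rather than the node's /etc/hosts:

    package main

    import (
        "fmt"
        "strings"
    )

    // upsertHost mirrors the shell one-liner in the log: remove any line
    // ending in "<TAB>name", then append a fresh "IP<TAB>name" entry.
    func upsertHost(hosts, ip, name string) string {
        var kept []string
        for _, line := range strings.Split(strings.TrimRight(hosts, "\n"), "\n") {
            if !strings.HasSuffix(line, "\t"+name) {
                kept = append(kept, line)
            }
        }
        kept = append(kept, ip+"\t"+name)
        return strings.Join(kept, "\n") + "\n"
    }

    func main() {
        // Example contents only; on the node the file is /etc/hosts.
        hosts := "127.0.0.1\tlocalhost\n192.168.49.1\thost.minikube.internal\n"
        fmt.Print(upsertHost(hosts, "192.168.49.1", "host.minikube.internal"))
    }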
	I1009 19:29:54.769025  343307 mustload.go:65] Loading cluster: ha-807463
	I1009 19:29:54.769312  343307 config.go:182] Loaded profile config "ha-807463": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1009 19:29:54.769581  343307 cli_runner.go:164] Run: docker container inspect ha-807463 --format={{.State.Status}}
	I1009 19:29:54.789308  343307 host.go:66] Checking if "ha-807463" exists ...
	I1009 19:29:54.789631  343307 certs.go:69] Setting up /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/ha-807463 for IP: 192.168.49.4
	I1009 19:29:54.789648  343307 certs.go:195] generating shared ca certs ...
	I1009 19:29:54.789665  343307 certs.go:227] acquiring lock for ca certs: {Name:mk21fccf7de12baa51e226eec44546b42b579a70 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 19:29:54.789790  343307 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21683-294150/.minikube/ca.key
	I1009 19:29:54.789840  343307 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21683-294150/.minikube/proxy-client-ca.key
	I1009 19:29:54.789852  343307 certs.go:257] generating profile certs ...
	I1009 19:29:54.789935  343307 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/ha-807463/client.key
	I1009 19:29:54.790005  343307 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/ha-807463/apiserver.key.8f59bad3
	I1009 19:29:54.790050  343307 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/ha-807463/proxy-client.key
	I1009 19:29:54.790063  343307 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-294150/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1009 19:29:54.790075  343307 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-294150/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1009 19:29:54.790096  343307 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-294150/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1009 19:29:54.790112  343307 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-294150/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1009 19:29:54.790124  343307 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/ha-807463/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1009 19:29:54.790141  343307 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/ha-807463/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1009 19:29:54.790152  343307 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/ha-807463/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1009 19:29:54.790162  343307 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/ha-807463/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1009 19:29:54.790217  343307 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-294150/.minikube/certs/296002.pem (1338 bytes)
	W1009 19:29:54.790247  343307 certs.go:480] ignoring /home/jenkins/minikube-integration/21683-294150/.minikube/certs/296002_empty.pem, impossibly tiny 0 bytes
	I1009 19:29:54.790255  343307 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-294150/.minikube/certs/ca-key.pem (1679 bytes)
	I1009 19:29:54.790279  343307 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-294150/.minikube/certs/ca.pem (1078 bytes)
	I1009 19:29:54.790304  343307 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-294150/.minikube/certs/cert.pem (1123 bytes)
	I1009 19:29:54.790325  343307 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-294150/.minikube/certs/key.pem (1679 bytes)
	I1009 19:29:54.790366  343307 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-294150/.minikube/files/etc/ssl/certs/2960022.pem (1708 bytes)
	I1009 19:29:54.790392  343307 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-294150/.minikube/files/etc/ssl/certs/2960022.pem -> /usr/share/ca-certificates/2960022.pem
	I1009 19:29:54.790404  343307 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-294150/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1009 19:29:54.790415  343307 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-294150/.minikube/certs/296002.pem -> /usr/share/ca-certificates/296002.pem
	I1009 19:29:54.790566  343307 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-807463
	I1009 19:29:54.807723  343307 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33181 SSHKeyPath:/home/jenkins/minikube-integration/21683-294150/.minikube/machines/ha-807463/id_rsa Username:docker}
	I1009 19:29:54.905478  343307 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I1009 19:29:54.915115  343307 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I1009 19:29:54.924123  343307 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I1009 19:29:54.927867  343307 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I1009 19:29:54.936366  343307 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I1009 19:29:54.940038  343307 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I1009 19:29:54.948153  343307 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I1009 19:29:54.952558  343307 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I1009 19:29:54.962178  343307 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I1009 19:29:54.966425  343307 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I1009 19:29:54.974761  343307 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I1009 19:29:54.978501  343307 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I1009 19:29:54.987786  343307 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-294150/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1009 19:29:55.037480  343307 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-294150/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1009 19:29:55.060963  343307 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-294150/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1009 19:29:55.082145  343307 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-294150/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1009 19:29:55.105188  343307 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/ha-807463/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1009 19:29:55.128516  343307 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/ha-807463/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1009 19:29:55.149252  343307 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/ha-807463/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1009 19:29:55.172354  343307 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/ha-807463/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1009 19:29:55.193857  343307 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-294150/.minikube/files/etc/ssl/certs/2960022.pem --> /usr/share/ca-certificates/2960022.pem (1708 bytes)
	I1009 19:29:55.219080  343307 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-294150/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1009 19:29:55.237634  343307 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-294150/.minikube/certs/296002.pem --> /usr/share/ca-certificates/296002.pem (1338 bytes)
	I1009 19:29:55.256720  343307 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I1009 19:29:55.279349  343307 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I1009 19:29:55.298083  343307 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I1009 19:29:55.312857  343307 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I1009 19:29:55.328467  343307 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I1009 19:29:55.343367  343307 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I1009 19:29:55.357598  343307 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I1009 19:29:55.374321  343307 ssh_runner.go:195] Run: openssl version
	I1009 19:29:55.380839  343307 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2960022.pem && ln -fs /usr/share/ca-certificates/2960022.pem /etc/ssl/certs/2960022.pem"
	I1009 19:29:55.389522  343307 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2960022.pem
	I1009 19:29:55.394545  343307 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  9 19:08 /usr/share/ca-certificates/2960022.pem
	I1009 19:29:55.394618  343307 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2960022.pem
	I1009 19:29:55.437345  343307 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2960022.pem /etc/ssl/certs/3ec20f2e.0"
	I1009 19:29:55.447436  343307 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1009 19:29:55.456198  343307 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1009 19:29:55.460194  343307 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  9 19:01 /usr/share/ca-certificates/minikubeCA.pem
	I1009 19:29:55.460288  343307 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1009 19:29:55.502457  343307 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1009 19:29:55.511155  343307 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/296002.pem && ln -fs /usr/share/ca-certificates/296002.pem /etc/ssl/certs/296002.pem"
	I1009 19:29:55.519603  343307 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/296002.pem
	I1009 19:29:55.523571  343307 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  9 19:08 /usr/share/ca-certificates/296002.pem
	I1009 19:29:55.523682  343307 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/296002.pem
	I1009 19:29:55.565661  343307 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/296002.pem /etc/ssl/certs/51391683.0"
	I1009 19:29:55.575332  343307 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1009 19:29:55.579545  343307 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1009 19:29:55.620938  343307 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1009 19:29:55.663052  343307 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1009 19:29:55.708075  343307 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1009 19:29:55.749078  343307 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1009 19:29:55.800791  343307 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
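	Each `openssl x509 -checkend 86400` call above asks whether a certificate expires within the next 24 hours; a non-zero exit would trigger regeneration. An equivalent check in Go with crypto/x509 (the path below is one of the certs from this log and is only an example):

    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
        "time"
    )

    // expiresWithin reports whether the PEM-encoded certificate at path expires
    // inside the given window, the same question -checkend answers.
    func expiresWithin(path string, window time.Duration) (bool, error) {
        data, err := os.ReadFile(path)
        if err != nil {
            return false, err
        }
        block, _ := pem.Decode(data)
        if block == nil {
            return false, fmt.Errorf("no PEM block in %s", path)
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            return false, err
        }
        return time.Now().Add(window).After(cert.NotAfter), nil
    }

    func main() {
        soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        fmt.Println("expires within 24h:", soon)
    }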
	I1009 19:29:55.844259  343307 kubeadm.go:934] updating node {m03 192.168.49.4 8443 v1.34.1 crio true true} ...
	I1009 19:29:55.844433  343307 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-807463-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.4
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:ha-807463 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1009 19:29:55.844463  343307 kube-vip.go:115] generating kube-vip config ...
	I1009 19:29:55.844514  343307 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I1009 19:29:55.857076  343307 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I1009 19:29:55.857168  343307 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
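	The lsmod probe before this manifest checks whether the ip_vs kernel modules are loaded; since they are not, the warning above notes that control-plane load-balancing is skipped and kube-vip falls back to plain ARP announcement of the VIP (vip_arp: "true" in the generated pod). A sketch of the same probe in Go, reading /proc/modules directly instead of shelling out to lsmod:

    package main

    import (
        "bufio"
        "fmt"
        "os"
        "strings"
    )

    // hasIPVS answers the same question as `lsmod | grep ip_vs`: lsmod reads
    // /proc/modules, so checking that file directly is equivalent.
    func hasIPVS() (bool, error) {
        f, err := os.Open("/proc/modules")
        if err != nil {
            return false, err
        }
        defer f.Close()
        sc := bufio.NewScanner(f)
        for sc.Scan() {
            if strings.HasPrefix(sc.Text(), "ip_vs") {
                return true, nil
            }
        }
        return false, sc.Err()
    }

    func main() {
        ok, err := hasIPVS()
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        fmt.Println("ip_vs loaded:", ok)
    }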
	I1009 19:29:55.857232  343307 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1009 19:29:55.865620  343307 binaries.go:44] Found k8s binaries, skipping transfer
	I1009 19:29:55.865690  343307 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I1009 19:29:55.873976  343307 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1009 19:29:55.888496  343307 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1009 19:29:55.902132  343307 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I1009 19:29:55.918614  343307 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1009 19:29:55.922408  343307 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1009 19:29:55.932872  343307 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 19:29:56.078754  343307 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1009 19:29:56.098490  343307 start.go:236] Will wait 6m0s for node &{Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1009 19:29:56.098835  343307 config.go:182] Loaded profile config "ha-807463": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1009 19:29:56.102467  343307 out.go:179] * Verifying Kubernetes components...
	I1009 19:29:56.105295  343307 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 19:29:56.244415  343307 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1009 19:29:56.260645  343307 kapi.go:59] client config for ha-807463: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21683-294150/.minikube/profiles/ha-807463/client.crt", KeyFile:"/home/jenkins/minikube-integration/21683-294150/.minikube/profiles/ha-807463/client.key", CAFile:"/home/jenkins/minikube-integration/21683-294150/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2120180), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1009 19:29:56.260766  343307 kubeadm.go:491] Overriding stale ClientConfig host https://192.168.49.254:8443 with https://192.168.49.2:8443
	I1009 19:29:56.261025  343307 node_ready.go:35] waiting up to 6m0s for node "ha-807463-m03" to be "Ready" ...
	W1009 19:29:58.265441  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:30:00.338043  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:30:02.766376  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:30:05.271576  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:30:07.765013  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:30:09.766174  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:30:12.268909  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:30:14.764872  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:30:16.768216  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:30:19.265861  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:30:21.764655  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:30:23.765433  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:30:26.265822  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:30:28.267509  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:30:30.765442  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:30:33.266200  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:30:35.765625  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:30:38.265302  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:30:40.265407  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:30:42.270313  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:30:44.765053  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:30:47.264227  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:30:49.264310  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:30:51.264693  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:30:53.266262  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:30:55.765430  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:30:57.765657  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:31:00.296961  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:31:02.765162  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:31:05.265758  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:31:07.270661  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:31:09.764829  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:31:11.766346  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:31:14.265615  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:31:16.765212  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:31:19.264362  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:31:21.265737  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:31:23.765070  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:31:26.265524  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:31:28.764786  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:31:30.765098  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:31:33.265489  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:31:35.270526  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:31:37.764838  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:31:40.265487  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:31:42.765053  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:31:45.269843  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:31:47.765589  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:31:49.766098  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:31:52.274275  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:31:54.765171  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:31:57.265540  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:31:59.265763  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:32:01.270860  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:32:03.765024  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:32:06.265424  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:32:08.766290  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:32:10.766762  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:32:13.264661  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:32:15.265789  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:32:17.765441  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:32:19.765504  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:32:22.269835  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:32:24.764880  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:32:26.764993  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:32:28.765201  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:32:30.765672  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:32:33.269831  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:32:35.271203  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:32:37.764975  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:32:39.765423  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:32:42.271235  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:32:44.765366  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:32:47.264895  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:32:49.267101  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:32:51.764961  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:32:53.765546  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:32:55.765910  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:32:58.272156  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:33:00.765521  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:33:03.265015  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:33:05.265319  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:33:07.764930  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:33:09.765819  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:33:12.270731  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:33:14.764917  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:33:16.765423  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:33:19.265783  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:33:21.268655  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:33:23.764590  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:33:25.765798  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:33:28.266110  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:33:30.765102  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:33:33.272016  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:33:35.765481  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:33:38.266269  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:33:40.268920  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:33:42.764575  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:33:44.765157  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:33:47.271446  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:33:49.764820  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:33:51.765204  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:33:54.271178  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:33:56.765244  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:33:59.264746  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:34:01.265757  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:34:03.266309  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:34:05.765330  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:34:08.271832  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:34:10.764901  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:34:13.271000  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:34:15.764750  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:34:18.271187  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:34:20.764309  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:34:22.764554  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:34:24.765015  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:34:27.265491  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:34:29.269747  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:34:31.765383  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:34:34.265977  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:34:36.271158  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:34:38.764726  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:34:41.269997  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:34:43.765647  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:34:46.264806  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:34:48.264841  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:34:50.265171  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:34:52.273405  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:34:54.764904  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:34:56.772617  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:34:59.264570  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:35:01.266121  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:35:03.764578  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:35:05.765062  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:35:07.765743  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:35:10.264753  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:35:12.267514  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:35:14.271366  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:35:16.764238  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:35:18.764646  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:35:21.264582  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:35:23.765647  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:35:26.265493  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:35:28.765534  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:35:31.266108  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:35:33.271209  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:35:35.765495  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:35:38.264544  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:35:40.265777  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:35:42.765010  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:35:45.320159  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:35:47.765477  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:35:50.267171  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:35:52.764971  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:35:54.765424  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	I1009 19:35:56.261403  343307 node_ready.go:38] duration metric: took 6m0.00032425s for node "ha-807463-m03" to be "Ready" ...
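	The six minutes of retries above follow a simple pattern: poll the node's Ready condition every couple of seconds until it reports True or the deadline passes, then surface "context deadline exceeded". A stdlib-only sketch of that wait shape (the real check queries the API server through the Kubernetes client; the short deadline here just keeps the example quick):

    package main

    import (
        "context"
        "fmt"
        "time"
    )

    // waitForReady polls check on an interval until it returns true or the
    // context deadline passes, which is the shape of the 6-minute wait above.
    func waitForReady(ctx context.Context, interval time.Duration, check func() (bool, error)) error {
        ticker := time.NewTicker(interval)
        defer ticker.Stop()
        for {
            ready, err := check()
            if err != nil {
                return err
            }
            if ready {
                return nil
            }
            select {
            case <-ctx.Done():
                // Matches the failure mode logged below:
                // "waiting for node to be ready: WaitNodeCondition: context deadline exceeded".
                return fmt.Errorf("waiting for node to be ready: %w", ctx.Err())
            case <-ticker.C:
            }
        }
    }

    func main() {
        // The real wait uses 6*time.Minute; a short deadline keeps this sketch quick.
        ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
        defer cancel()
        // Stand-in check; minikube asks whether the node's "Ready" condition is "True".
        check := func() (bool, error) { return false, nil }
        if err := waitForReady(ctx, 2*time.Second, check); err != nil {
            fmt.Println(err)
        }
    }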
	I1009 19:35:56.264406  343307 out.go:203] 
	W1009 19:35:56.267318  343307 out.go:285] X Exiting due to GUEST_START: failed to start node: adding node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	X Exiting due to GUEST_START: failed to start node: adding node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1009 19:35:56.267354  343307 out.go:285] * 
	* 
	W1009 19:35:56.269757  343307 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1009 19:35:56.272075  343307 out.go:203] 

                                                
                                                
** /stderr **
ha_test.go:471: failed to run minikube start. args "out/minikube-linux-arm64 -p ha-807463 node list --alsologtostderr -v 5" : exit status 80
ha_test.go:474: (dbg) Run:  out/minikube-linux-arm64 -p ha-807463 node list --alsologtostderr -v 5
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestMultiControlPlane/serial/RestartClusterKeepsNodes]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestMultiControlPlane/serial/RestartClusterKeepsNodes]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect ha-807463
helpers_test.go:243: (dbg) docker inspect ha-807463:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "fea8f67be9d437ba2c81eace43786d004b06e1b2b690c3ab6f02d448677e04b6",
	        "Created": "2025-10-09T19:22:12.218448558Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 343436,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-09T19:27:31.498701729Z",
	            "FinishedAt": "2025-10-09T19:27:30.881285461Z"
	        },
	        "Image": "sha256:0e619a02aa1f399c748a05785ca381533d7a01df724b62f3a9ddd9e288db8f6f",
	        "ResolvConfPath": "/var/lib/docker/containers/fea8f67be9d437ba2c81eace43786d004b06e1b2b690c3ab6f02d448677e04b6/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/fea8f67be9d437ba2c81eace43786d004b06e1b2b690c3ab6f02d448677e04b6/hostname",
	        "HostsPath": "/var/lib/docker/containers/fea8f67be9d437ba2c81eace43786d004b06e1b2b690c3ab6f02d448677e04b6/hosts",
	        "LogPath": "/var/lib/docker/containers/fea8f67be9d437ba2c81eace43786d004b06e1b2b690c3ab6f02d448677e04b6/fea8f67be9d437ba2c81eace43786d004b06e1b2b690c3ab6f02d448677e04b6-json.log",
	        "Name": "/ha-807463",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ha-807463:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "ha-807463",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "fea8f67be9d437ba2c81eace43786d004b06e1b2b690c3ab6f02d448677e04b6",
	                "LowerDir": "/var/lib/docker/overlay2/501f3dc17989cbf113e3e1d86a2dc5dbf4a1ebf96c1051617a1e82e0c118ddb2-init/diff:/var/lib/docker/overlay2/810a91395ed9b7ed2c0bbbdee8600efcf64f88722cbabc47d471235a9f901ed9/diff",
	                "MergedDir": "/var/lib/docker/overlay2/501f3dc17989cbf113e3e1d86a2dc5dbf4a1ebf96c1051617a1e82e0c118ddb2/merged",
	                "UpperDir": "/var/lib/docker/overlay2/501f3dc17989cbf113e3e1d86a2dc5dbf4a1ebf96c1051617a1e82e0c118ddb2/diff",
	                "WorkDir": "/var/lib/docker/overlay2/501f3dc17989cbf113e3e1d86a2dc5dbf4a1ebf96c1051617a1e82e0c118ddb2/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "ha-807463",
	                "Source": "/var/lib/docker/volumes/ha-807463/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "ha-807463",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ha-807463",
	                "name.minikube.sigs.k8s.io": "ha-807463",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "519a0261c9568d4a6f9cab4a02626789b917d4097449bf7d122da62e1553ad90",
	            "SandboxKey": "/var/run/docker/netns/519a0261c956",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33181"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33182"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33185"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33183"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33184"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ha-807463": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "46:d7:45:51:f4:8a",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "3847a657768484ae039efdd09e2b590403676178eb4c67c06a2221fe144c70b7",
	                    "EndpointID": "1be139014228dabc7add444f5a4d8325f46a753a08b0696634c3bb797577acd0",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "ha-807463",
	                        "fea8f67be9d4"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
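helpers_test.go note: the inspect output above records the container's published ports under NetworkSettings.Ports; the empty HostPort values under HostConfig.PortBindings only mean the host port was left for Docker to assign at start. A minimal way to read the assigned SSH port back out of this output (assuming the docker CLI is on PATH and the container name ha-807463 from this run) is the same Go template the test harness itself runs later in these logs:

	docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' ha-807463

For this run that prints 33181, matching the 22/tcp mapping shown above.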
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p ha-807463 -n ha-807463
helpers_test.go:252: <<< TestMultiControlPlane/serial/RestartClusterKeepsNodes FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestMultiControlPlane/serial/RestartClusterKeepsNodes]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p ha-807463 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p ha-807463 logs -n 25: (1.579117842s)
helpers_test.go:260: TestMultiControlPlane/serial/RestartClusterKeepsNodes logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                 ARGS                                                                 │  PROFILE  │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ cp      │ ha-807463 cp ha-807463-m03:/home/docker/cp-test.txt ha-807463-m02:/home/docker/cp-test_ha-807463-m03_ha-807463-m02.txt               │ ha-807463 │ jenkins │ v1.37.0 │ 09 Oct 25 19:26 UTC │ 09 Oct 25 19:26 UTC │
	│ ssh     │ ha-807463 ssh -n ha-807463-m03 sudo cat /home/docker/cp-test.txt                                                                     │ ha-807463 │ jenkins │ v1.37.0 │ 09 Oct 25 19:26 UTC │ 09 Oct 25 19:26 UTC │
	│ ssh     │ ha-807463 ssh -n ha-807463-m02 sudo cat /home/docker/cp-test_ha-807463-m03_ha-807463-m02.txt                                         │ ha-807463 │ jenkins │ v1.37.0 │ 09 Oct 25 19:26 UTC │ 09 Oct 25 19:26 UTC │
	│ cp      │ ha-807463 cp ha-807463-m03:/home/docker/cp-test.txt ha-807463-m04:/home/docker/cp-test_ha-807463-m03_ha-807463-m04.txt               │ ha-807463 │ jenkins │ v1.37.0 │ 09 Oct 25 19:26 UTC │ 09 Oct 25 19:26 UTC │
	│ ssh     │ ha-807463 ssh -n ha-807463-m03 sudo cat /home/docker/cp-test.txt                                                                     │ ha-807463 │ jenkins │ v1.37.0 │ 09 Oct 25 19:26 UTC │ 09 Oct 25 19:26 UTC │
	│ ssh     │ ha-807463 ssh -n ha-807463-m04 sudo cat /home/docker/cp-test_ha-807463-m03_ha-807463-m04.txt                                         │ ha-807463 │ jenkins │ v1.37.0 │ 09 Oct 25 19:26 UTC │ 09 Oct 25 19:26 UTC │
	│ cp      │ ha-807463 cp testdata/cp-test.txt ha-807463-m04:/home/docker/cp-test.txt                                                             │ ha-807463 │ jenkins │ v1.37.0 │ 09 Oct 25 19:26 UTC │ 09 Oct 25 19:26 UTC │
	│ ssh     │ ha-807463 ssh -n ha-807463-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-807463 │ jenkins │ v1.37.0 │ 09 Oct 25 19:26 UTC │ 09 Oct 25 19:26 UTC │
	│ cp      │ ha-807463 cp ha-807463-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1218422779/001/cp-test_ha-807463-m04.txt │ ha-807463 │ jenkins │ v1.37.0 │ 09 Oct 25 19:26 UTC │ 09 Oct 25 19:26 UTC │
	│ ssh     │ ha-807463 ssh -n ha-807463-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-807463 │ jenkins │ v1.37.0 │ 09 Oct 25 19:26 UTC │ 09 Oct 25 19:26 UTC │
	│ cp      │ ha-807463 cp ha-807463-m04:/home/docker/cp-test.txt ha-807463:/home/docker/cp-test_ha-807463-m04_ha-807463.txt                       │ ha-807463 │ jenkins │ v1.37.0 │ 09 Oct 25 19:26 UTC │ 09 Oct 25 19:26 UTC │
	│ ssh     │ ha-807463 ssh -n ha-807463-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-807463 │ jenkins │ v1.37.0 │ 09 Oct 25 19:26 UTC │ 09 Oct 25 19:26 UTC │
	│ ssh     │ ha-807463 ssh -n ha-807463 sudo cat /home/docker/cp-test_ha-807463-m04_ha-807463.txt                                                 │ ha-807463 │ jenkins │ v1.37.0 │ 09 Oct 25 19:26 UTC │ 09 Oct 25 19:26 UTC │
	│ cp      │ ha-807463 cp ha-807463-m04:/home/docker/cp-test.txt ha-807463-m02:/home/docker/cp-test_ha-807463-m04_ha-807463-m02.txt               │ ha-807463 │ jenkins │ v1.37.0 │ 09 Oct 25 19:26 UTC │ 09 Oct 25 19:26 UTC │
	│ ssh     │ ha-807463 ssh -n ha-807463-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-807463 │ jenkins │ v1.37.0 │ 09 Oct 25 19:26 UTC │ 09 Oct 25 19:26 UTC │
	│ ssh     │ ha-807463 ssh -n ha-807463-m02 sudo cat /home/docker/cp-test_ha-807463-m04_ha-807463-m02.txt                                         │ ha-807463 │ jenkins │ v1.37.0 │ 09 Oct 25 19:26 UTC │ 09 Oct 25 19:26 UTC │
	│ cp      │ ha-807463 cp ha-807463-m04:/home/docker/cp-test.txt ha-807463-m03:/home/docker/cp-test_ha-807463-m04_ha-807463-m03.txt               │ ha-807463 │ jenkins │ v1.37.0 │ 09 Oct 25 19:26 UTC │ 09 Oct 25 19:26 UTC │
	│ ssh     │ ha-807463 ssh -n ha-807463-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-807463 │ jenkins │ v1.37.0 │ 09 Oct 25 19:26 UTC │ 09 Oct 25 19:26 UTC │
	│ ssh     │ ha-807463 ssh -n ha-807463-m03 sudo cat /home/docker/cp-test_ha-807463-m04_ha-807463-m03.txt                                         │ ha-807463 │ jenkins │ v1.37.0 │ 09 Oct 25 19:26 UTC │ 09 Oct 25 19:26 UTC │
	│ node    │ ha-807463 node stop m02 --alsologtostderr -v 5                                                                                       │ ha-807463 │ jenkins │ v1.37.0 │ 09 Oct 25 19:26 UTC │ 09 Oct 25 19:26 UTC │
	│ node    │ ha-807463 node start m02 --alsologtostderr -v 5                                                                                      │ ha-807463 │ jenkins │ v1.37.0 │ 09 Oct 25 19:26 UTC │ 09 Oct 25 19:27 UTC │
	│ node    │ ha-807463 node list --alsologtostderr -v 5                                                                                           │ ha-807463 │ jenkins │ v1.37.0 │ 09 Oct 25 19:27 UTC │                     │
	│ stop    │ ha-807463 stop --alsologtostderr -v 5                                                                                                │ ha-807463 │ jenkins │ v1.37.0 │ 09 Oct 25 19:27 UTC │ 09 Oct 25 19:27 UTC │
	│ start   │ ha-807463 start --wait true --alsologtostderr -v 5                                                                                   │ ha-807463 │ jenkins │ v1.37.0 │ 09 Oct 25 19:27 UTC │                     │
	│ node    │ ha-807463 node list --alsologtostderr -v 5                                                                                           │ ha-807463 │ jenkins │ v1.37.0 │ 09 Oct 25 19:35 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/09 19:27:31
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1009 19:27:31.218830  343307 out.go:360] Setting OutFile to fd 1 ...
	I1009 19:27:31.218980  343307 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1009 19:27:31.218993  343307 out.go:374] Setting ErrFile to fd 2...
	I1009 19:27:31.219013  343307 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1009 19:27:31.219307  343307 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21683-294150/.minikube/bin
	I1009 19:27:31.219769  343307 out.go:368] Setting JSON to false
	I1009 19:27:31.220680  343307 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":7791,"bootTime":1760030261,"procs":153,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1009 19:27:31.220751  343307 start.go:143] virtualization:  
	I1009 19:27:31.225902  343307 out.go:179] * [ha-807463] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1009 19:27:31.229045  343307 out.go:179]   - MINIKUBE_LOCATION=21683
	I1009 19:27:31.229154  343307 notify.go:221] Checking for updates...
	I1009 19:27:31.235436  343307 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1009 19:27:31.238296  343307 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21683-294150/kubeconfig
	I1009 19:27:31.241057  343307 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21683-294150/.minikube
	I1009 19:27:31.243947  343307 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1009 19:27:31.246781  343307 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1009 19:27:31.250030  343307 config.go:182] Loaded profile config "ha-807463": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1009 19:27:31.250184  343307 driver.go:422] Setting default libvirt URI to qemu:///system
	I1009 19:27:31.286472  343307 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1009 19:27:31.286604  343307 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1009 19:27:31.343705  343307 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:4 ContainersRunning:0 ContainersPaused:0 ContainersStopped:4 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:27 OomKillDisable:true NGoroutines:42 SystemTime:2025-10-09 19:27:31.334706362 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214827008 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1009 19:27:31.343816  343307 docker.go:319] overlay module found
	I1009 19:27:31.346870  343307 out.go:179] * Using the docker driver based on existing profile
	I1009 19:27:31.349767  343307 start.go:309] selected driver: docker
	I1009 19:27:31.349786  343307 start.go:930] validating driver "docker" against &{Name:ha-807463 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-807463 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName
:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.1 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:f
alse ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false Dis
ableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1009 19:27:31.349926  343307 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1009 19:27:31.350028  343307 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1009 19:27:31.412249  343307 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:4 ContainersRunning:0 ContainersPaused:0 ContainersStopped:4 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:27 OomKillDisable:true NGoroutines:42 SystemTime:2025-10-09 19:27:31.403030574 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214827008 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1009 19:27:31.412653  343307 start_flags.go:993] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1009 19:27:31.412689  343307 cni.go:84] Creating CNI manager for ""
	I1009 19:27:31.412755  343307 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I1009 19:27:31.412799  343307 start.go:353] cluster config:
	{Name:ha-807463 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-807463 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerR
untime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false isti
o-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMne
tClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1009 19:27:31.417709  343307 out.go:179] * Starting "ha-807463" primary control-plane node in "ha-807463" cluster
	I1009 19:27:31.420530  343307 cache.go:123] Beginning downloading kic base image for docker with crio
	I1009 19:27:31.423466  343307 out.go:179] * Pulling base image v0.0.48-1759745255-21703 ...
	I1009 19:27:31.426321  343307 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1009 19:27:31.426392  343307 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21683-294150/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1009 19:27:31.426406  343307 cache.go:58] Caching tarball of preloaded images
	I1009 19:27:31.426410  343307 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon
	I1009 19:27:31.426490  343307 preload.go:233] Found /home/jenkins/minikube-integration/21683-294150/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1009 19:27:31.426508  343307 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1009 19:27:31.426650  343307 profile.go:143] Saving config to /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/ha-807463/config.json ...
	I1009 19:27:31.445925  343307 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon, skipping pull
	I1009 19:27:31.445951  343307 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 exists in daemon, skipping load
	I1009 19:27:31.445969  343307 cache.go:232] Successfully downloaded all kic artifacts
	I1009 19:27:31.446007  343307 start.go:361] acquireMachinesLock for ha-807463: {Name:mk7b03a6b271157d59e205354be444442bc66672 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1009 19:27:31.446069  343307 start.go:365] duration metric: took 41.674µs to acquireMachinesLock for "ha-807463"
	I1009 19:27:31.446095  343307 start.go:97] Skipping create...Using existing machine configuration
	I1009 19:27:31.446101  343307 fix.go:55] fixHost starting: 
	I1009 19:27:31.446358  343307 cli_runner.go:164] Run: docker container inspect ha-807463 --format={{.State.Status}}
	I1009 19:27:31.463339  343307 fix.go:113] recreateIfNeeded on ha-807463: state=Stopped err=<nil>
	W1009 19:27:31.463369  343307 fix.go:139] unexpected machine state, will restart: <nil>
	I1009 19:27:31.466724  343307 out.go:252] * Restarting existing docker container for "ha-807463" ...
	I1009 19:27:31.466808  343307 cli_runner.go:164] Run: docker start ha-807463
	I1009 19:27:31.729554  343307 cli_runner.go:164] Run: docker container inspect ha-807463 --format={{.State.Status}}
	I1009 19:27:31.752533  343307 kic.go:430] container "ha-807463" state is running.
	I1009 19:27:31.752940  343307 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-807463
	I1009 19:27:31.776613  343307 profile.go:143] Saving config to /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/ha-807463/config.json ...
	I1009 19:27:31.776858  343307 machine.go:93] provisionDockerMachine start ...
	I1009 19:27:31.776933  343307 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-807463
	I1009 19:27:31.798253  343307 main.go:141] libmachine: Using SSH client type: native
	I1009 19:27:31.798586  343307 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33181 <nil> <nil>}
	I1009 19:27:31.798603  343307 main.go:141] libmachine: About to run SSH command:
	hostname
	I1009 19:27:31.799247  343307 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1009 19:27:34.945362  343307 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-807463
	
	I1009 19:27:34.945397  343307 ubuntu.go:182] provisioning hostname "ha-807463"
	I1009 19:27:34.945467  343307 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-807463
	I1009 19:27:34.962891  343307 main.go:141] libmachine: Using SSH client type: native
	I1009 19:27:34.963208  343307 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33181 <nil> <nil>}
	I1009 19:27:34.963226  343307 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-807463 && echo "ha-807463" | sudo tee /etc/hostname
	I1009 19:27:35.120375  343307 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-807463
	
	I1009 19:27:35.120459  343307 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-807463
	I1009 19:27:35.138932  343307 main.go:141] libmachine: Using SSH client type: native
	I1009 19:27:35.139244  343307 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33181 <nil> <nil>}
	I1009 19:27:35.139259  343307 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-807463' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-807463/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-807463' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1009 19:27:35.285402  343307 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1009 19:27:35.285451  343307 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21683-294150/.minikube CaCertPath:/home/jenkins/minikube-integration/21683-294150/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21683-294150/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21683-294150/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21683-294150/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21683-294150/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21683-294150/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21683-294150/.minikube}
	I1009 19:27:35.285478  343307 ubuntu.go:190] setting up certificates
	I1009 19:27:35.285488  343307 provision.go:84] configureAuth start
	I1009 19:27:35.285558  343307 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-807463
	I1009 19:27:35.302829  343307 provision.go:143] copyHostCerts
	I1009 19:27:35.302873  343307 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-294150/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21683-294150/.minikube/ca.pem
	I1009 19:27:35.302904  343307 exec_runner.go:144] found /home/jenkins/minikube-integration/21683-294150/.minikube/ca.pem, removing ...
	I1009 19:27:35.302917  343307 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21683-294150/.minikube/ca.pem
	I1009 19:27:35.303005  343307 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-294150/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21683-294150/.minikube/ca.pem (1078 bytes)
	I1009 19:27:35.303096  343307 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-294150/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21683-294150/.minikube/cert.pem
	I1009 19:27:35.303118  343307 exec_runner.go:144] found /home/jenkins/minikube-integration/21683-294150/.minikube/cert.pem, removing ...
	I1009 19:27:35.303127  343307 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21683-294150/.minikube/cert.pem
	I1009 19:27:35.303156  343307 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-294150/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21683-294150/.minikube/cert.pem (1123 bytes)
	I1009 19:27:35.303204  343307 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-294150/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21683-294150/.minikube/key.pem
	I1009 19:27:35.303225  343307 exec_runner.go:144] found /home/jenkins/minikube-integration/21683-294150/.minikube/key.pem, removing ...
	I1009 19:27:35.303230  343307 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21683-294150/.minikube/key.pem
	I1009 19:27:35.303255  343307 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-294150/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21683-294150/.minikube/key.pem (1679 bytes)
	I1009 19:27:35.303308  343307 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21683-294150/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21683-294150/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21683-294150/.minikube/certs/ca-key.pem org=jenkins.ha-807463 san=[127.0.0.1 192.168.49.2 ha-807463 localhost minikube]
	I1009 19:27:35.901224  343307 provision.go:177] copyRemoteCerts
	I1009 19:27:35.901289  343307 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1009 19:27:35.901355  343307 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-807463
	I1009 19:27:35.918214  343307 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33181 SSHKeyPath:/home/jenkins/minikube-integration/21683-294150/.minikube/machines/ha-807463/id_rsa Username:docker}
	I1009 19:27:36.021624  343307 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-294150/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1009 19:27:36.021693  343307 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-294150/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1009 19:27:36.040520  343307 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-294150/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1009 19:27:36.040583  343307 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-294150/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I1009 19:27:36.059254  343307 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-294150/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1009 19:27:36.059315  343307 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-294150/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1009 19:27:36.078084  343307 provision.go:87] duration metric: took 792.56918ms to configureAuth
	I1009 19:27:36.078112  343307 ubuntu.go:206] setting minikube options for container-runtime
	I1009 19:27:36.078344  343307 config.go:182] Loaded profile config "ha-807463": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1009 19:27:36.078465  343307 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-807463
	I1009 19:27:36.095675  343307 main.go:141] libmachine: Using SSH client type: native
	I1009 19:27:36.095992  343307 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33181 <nil> <nil>}
	I1009 19:27:36.096012  343307 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1009 19:27:36.425006  343307 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1009 19:27:36.425081  343307 machine.go:96] duration metric: took 4.648205511s to provisionDockerMachine
	I1009 19:27:36.425141  343307 start.go:294] postStartSetup for "ha-807463" (driver="docker")
	I1009 19:27:36.425177  343307 start.go:323] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1009 19:27:36.425298  343307 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1009 19:27:36.425384  343307 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-807463
	I1009 19:27:36.449453  343307 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33181 SSHKeyPath:/home/jenkins/minikube-integration/21683-294150/.minikube/machines/ha-807463/id_rsa Username:docker}
	I1009 19:27:36.553510  343307 ssh_runner.go:195] Run: cat /etc/os-release
	I1009 19:27:36.557246  343307 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1009 19:27:36.557278  343307 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1009 19:27:36.557290  343307 filesync.go:126] Scanning /home/jenkins/minikube-integration/21683-294150/.minikube/addons for local assets ...
	I1009 19:27:36.557367  343307 filesync.go:126] Scanning /home/jenkins/minikube-integration/21683-294150/.minikube/files for local assets ...
	I1009 19:27:36.557489  343307 filesync.go:149] local asset: /home/jenkins/minikube-integration/21683-294150/.minikube/files/etc/ssl/certs/2960022.pem -> 2960022.pem in /etc/ssl/certs
	I1009 19:27:36.557501  343307 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-294150/.minikube/files/etc/ssl/certs/2960022.pem -> /etc/ssl/certs/2960022.pem
	I1009 19:27:36.557607  343307 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1009 19:27:36.565210  343307 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-294150/.minikube/files/etc/ssl/certs/2960022.pem --> /etc/ssl/certs/2960022.pem (1708 bytes)
	I1009 19:27:36.583083  343307 start.go:297] duration metric: took 157.903278ms for postStartSetup
	I1009 19:27:36.583210  343307 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1009 19:27:36.583282  343307 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-807463
	I1009 19:27:36.600612  343307 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33181 SSHKeyPath:/home/jenkins/minikube-integration/21683-294150/.minikube/machines/ha-807463/id_rsa Username:docker}
	I1009 19:27:36.698274  343307 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1009 19:27:36.703016  343307 fix.go:57] duration metric: took 5.256907577s for fixHost
	I1009 19:27:36.703042  343307 start.go:84] releasing machines lock for "ha-807463", held for 5.256957103s
	I1009 19:27:36.703115  343307 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-807463
	I1009 19:27:36.720370  343307 ssh_runner.go:195] Run: cat /version.json
	I1009 19:27:36.720385  343307 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1009 19:27:36.720422  343307 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-807463
	I1009 19:27:36.720451  343307 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-807463
	I1009 19:27:36.743233  343307 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33181 SSHKeyPath:/home/jenkins/minikube-integration/21683-294150/.minikube/machines/ha-807463/id_rsa Username:docker}
	I1009 19:27:36.753326  343307 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33181 SSHKeyPath:/home/jenkins/minikube-integration/21683-294150/.minikube/machines/ha-807463/id_rsa Username:docker}
	I1009 19:27:36.948710  343307 ssh_runner.go:195] Run: systemctl --version
	I1009 19:27:36.955436  343307 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1009 19:27:36.994992  343307 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1009 19:27:37.001157  343307 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1009 19:27:37.001242  343307 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1009 19:27:37.015899  343307 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1009 19:27:37.015931  343307 start.go:496] detecting cgroup driver to use...
	I1009 19:27:37.016002  343307 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1009 19:27:37.016099  343307 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1009 19:27:37.034350  343307 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1009 19:27:37.049609  343307 docker.go:218] disabling cri-docker service (if available) ...
	I1009 19:27:37.049706  343307 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1009 19:27:37.065757  343307 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1009 19:27:37.079370  343307 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1009 19:27:37.204726  343307 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1009 19:27:37.324926  343307 docker.go:234] disabling docker service ...
	I1009 19:27:37.325051  343307 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1009 19:27:37.340669  343307 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1009 19:27:37.354186  343307 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1009 19:27:37.468499  343307 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1009 19:27:37.609321  343307 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1009 19:27:37.623308  343307 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1009 19:27:37.638872  343307 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1009 19:27:37.638957  343307 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:27:37.648255  343307 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1009 19:27:37.648376  343307 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:27:37.658302  343307 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:27:37.667181  343307 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:27:37.675984  343307 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1009 19:27:37.685440  343307 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:27:37.694680  343307 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:27:37.702750  343307 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:27:37.711421  343307 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1009 19:27:37.719182  343307 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1009 19:27:37.727483  343307 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 19:27:37.841375  343307 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1009 19:27:37.980708  343307 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1009 19:27:37.980812  343307 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1009 19:27:37.984807  343307 start.go:564] Will wait 60s for crictl version
	I1009 19:27:37.984933  343307 ssh_runner.go:195] Run: which crictl
	I1009 19:27:37.988572  343307 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1009 19:27:38.021983  343307 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1009 19:27:38.022073  343307 ssh_runner.go:195] Run: crio --version
	I1009 19:27:38.052703  343307 ssh_runner.go:195] Run: crio --version
	I1009 19:27:38.085238  343307 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1009 19:27:38.088088  343307 cli_runner.go:164] Run: docker network inspect ha-807463 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1009 19:27:38.104470  343307 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1009 19:27:38.108353  343307 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1009 19:27:38.118588  343307 kubeadm.go:883] updating cluster {Name:ha-807463 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-807463 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APISe
rverNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:
false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:f
alse DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1009 19:27:38.118741  343307 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1009 19:27:38.118810  343307 ssh_runner.go:195] Run: sudo crictl images --output json
	I1009 19:27:38.155316  343307 crio.go:514] all images are preloaded for cri-o runtime.
	I1009 19:27:38.155341  343307 crio.go:433] Images already preloaded, skipping extraction
	I1009 19:27:38.155400  343307 ssh_runner.go:195] Run: sudo crictl images --output json
	I1009 19:27:38.184223  343307 crio.go:514] all images are preloaded for cri-o runtime.
	I1009 19:27:38.184246  343307 cache_images.go:85] Images are preloaded, skipping loading
	I1009 19:27:38.184257  343307 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.34.1 crio true true} ...
	I1009 19:27:38.184370  343307 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-807463 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:ha-807463 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1009 19:27:38.184448  343307 ssh_runner.go:195] Run: crio config
	I1009 19:27:38.252414  343307 cni.go:84] Creating CNI manager for ""
	I1009 19:27:38.252436  343307 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I1009 19:27:38.252454  343307 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1009 19:27:38.252488  343307 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-807463 NodeName:ha-807463 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/mani
fests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1009 19:27:38.252634  343307 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-807463"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
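	The four documents above (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) are what gets staged as /var/tmp/minikube/kubeadm.yaml.new a few lines further down; on restart, minikube diffs the staged copy against the live one to decide whether the control plane needs reconfiguring. A minimal sketch of that check, assuming SSH access to the node (e.g. via minikube ssh -p ha-807463):
	    sudo cat /var/tmp/minikube/kubeadm.yaml.new        # the freshly rendered config shown above
	    sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	    # an empty diff leads to the "does not require reconfiguration" path seen later in this log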
	
	I1009 19:27:38.252656  343307 kube-vip.go:115] generating kube-vip config ...
	I1009 19:27:38.252721  343307 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I1009 19:27:38.265014  343307 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I1009 19:27:38.265147  343307 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
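	This manifest is copied to /etc/kubernetes/manifests/kube-vip.yaml just below, so the kubelet runs kube-vip as a static pod that claims the HA virtual IP 192.168.49.254 on eth0 via ARP leader election; because the ip_vs modules were not found above, the optional IPVS control-plane load-balancing is skipped and only VIP failover between control-plane nodes is used. A quick check from whichever node currently holds the lease, as a sketch (assumes the VIP lands on eth0):
	    ip addr show eth0 | grep 192.168.49.254          # VIP present on the current leader
	    curl -sk https://192.168.49.254:8443/version     # API server reachable through the VIP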
	I1009 19:27:38.265209  343307 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1009 19:27:38.272978  343307 binaries.go:44] Found k8s binaries, skipping transfer
	I1009 19:27:38.273096  343307 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I1009 19:27:38.280861  343307 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (359 bytes)
	I1009 19:27:38.294726  343307 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1009 19:27:38.307657  343307 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2206 bytes)
	I1009 19:27:38.320684  343307 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I1009 19:27:38.333393  343307 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1009 19:27:38.337014  343307 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1009 19:27:38.346725  343307 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 19:27:38.455808  343307 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1009 19:27:38.472442  343307 certs.go:69] Setting up /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/ha-807463 for IP: 192.168.49.2
	I1009 19:27:38.472472  343307 certs.go:195] generating shared ca certs ...
	I1009 19:27:38.472489  343307 certs.go:227] acquiring lock for ca certs: {Name:mk21fccf7de12baa51e226eec44546b42b579a70 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 19:27:38.472635  343307 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21683-294150/.minikube/ca.key
	I1009 19:27:38.472702  343307 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21683-294150/.minikube/proxy-client-ca.key
	I1009 19:27:38.472715  343307 certs.go:257] generating profile certs ...
	I1009 19:27:38.472790  343307 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/ha-807463/client.key
	I1009 19:27:38.472829  343307 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/ha-807463/apiserver.key.2f140c92
	I1009 19:27:38.472846  343307 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/ha-807463/apiserver.crt.2f140c92 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2 192.168.49.3 192.168.49.4 192.168.49.254]
	I1009 19:27:38.846814  343307 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/ha-807463/apiserver.crt.2f140c92 ...
	I1009 19:27:38.846850  343307 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/ha-807463/apiserver.crt.2f140c92: {Name:mkc2191acbc8bdf29d69f0113598f387f3156525 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 19:27:38.847045  343307 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/ha-807463/apiserver.key.2f140c92 ...
	I1009 19:27:38.847059  343307 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/ha-807463/apiserver.key.2f140c92: {Name:mk4420d6a062c4dab2900704e5add4b492d36555 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 19:27:38.847148  343307 certs.go:382] copying /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/ha-807463/apiserver.crt.2f140c92 -> /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/ha-807463/apiserver.crt
	I1009 19:27:38.847292  343307 certs.go:386] copying /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/ha-807463/apiserver.key.2f140c92 -> /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/ha-807463/apiserver.key
	I1009 19:27:38.847425  343307 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/ha-807463/proxy-client.key
	I1009 19:27:38.847442  343307 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-294150/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1009 19:27:38.847458  343307 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-294150/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1009 19:27:38.847476  343307 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-294150/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1009 19:27:38.847488  343307 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-294150/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1009 19:27:38.847504  343307 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/ha-807463/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1009 19:27:38.847525  343307 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/ha-807463/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1009 19:27:38.847541  343307 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/ha-807463/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1009 19:27:38.847559  343307 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/ha-807463/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1009 19:27:38.847611  343307 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-294150/.minikube/certs/296002.pem (1338 bytes)
	W1009 19:27:38.847645  343307 certs.go:480] ignoring /home/jenkins/minikube-integration/21683-294150/.minikube/certs/296002_empty.pem, impossibly tiny 0 bytes
	I1009 19:27:38.847656  343307 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-294150/.minikube/certs/ca-key.pem (1679 bytes)
	I1009 19:27:38.847681  343307 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-294150/.minikube/certs/ca.pem (1078 bytes)
	I1009 19:27:38.847709  343307 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-294150/.minikube/certs/cert.pem (1123 bytes)
	I1009 19:27:38.847733  343307 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-294150/.minikube/certs/key.pem (1679 bytes)
	I1009 19:27:38.847781  343307 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-294150/.minikube/files/etc/ssl/certs/2960022.pem (1708 bytes)
	I1009 19:27:38.847811  343307 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-294150/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1009 19:27:38.847826  343307 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-294150/.minikube/certs/296002.pem -> /usr/share/ca-certificates/296002.pem
	I1009 19:27:38.847838  343307 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-294150/.minikube/files/etc/ssl/certs/2960022.pem -> /usr/share/ca-certificates/2960022.pem
	I1009 19:27:38.848384  343307 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-294150/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1009 19:27:38.867598  343307 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-294150/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1009 19:27:38.888313  343307 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-294150/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1009 19:27:38.908288  343307 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-294150/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1009 19:27:38.929572  343307 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/ha-807463/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1009 19:27:38.949045  343307 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/ha-807463/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1009 19:27:38.966969  343307 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/ha-807463/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1009 19:27:38.986319  343307 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/ha-807463/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1009 19:27:39.012715  343307 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-294150/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1009 19:27:39.032678  343307 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-294150/.minikube/certs/296002.pem --> /usr/share/ca-certificates/296002.pem (1338 bytes)
	I1009 19:27:39.051431  343307 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-294150/.minikube/files/etc/ssl/certs/2960022.pem --> /usr/share/ca-certificates/2960022.pem (1708 bytes)
	I1009 19:27:39.069614  343307 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1009 19:27:39.090445  343307 ssh_runner.go:195] Run: openssl version
	I1009 19:27:39.098940  343307 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1009 19:27:39.108430  343307 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1009 19:27:39.119839  343307 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  9 19:01 /usr/share/ca-certificates/minikubeCA.pem
	I1009 19:27:39.119907  343307 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1009 19:27:39.188461  343307 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1009 19:27:39.197309  343307 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/296002.pem && ln -fs /usr/share/ca-certificates/296002.pem /etc/ssl/certs/296002.pem"
	I1009 19:27:39.212076  343307 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/296002.pem
	I1009 19:27:39.218737  343307 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  9 19:08 /usr/share/ca-certificates/296002.pem
	I1009 19:27:39.218850  343307 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/296002.pem
	I1009 19:27:39.320003  343307 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/296002.pem /etc/ssl/certs/51391683.0"
	I1009 19:27:39.338511  343307 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2960022.pem && ln -fs /usr/share/ca-certificates/2960022.pem /etc/ssl/certs/2960022.pem"
	I1009 19:27:39.353078  343307 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2960022.pem
	I1009 19:27:39.358619  343307 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  9 19:08 /usr/share/ca-certificates/2960022.pem
	I1009 19:27:39.358736  343307 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2960022.pem
	I1009 19:27:39.417831  343307 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2960022.pem /etc/ssl/certs/3ec20f2e.0"
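	Each CA bundle is hashed with openssl x509 -hash -noout and then symlinked as /etc/ssl/certs/<hash>.0 (b5213941.0, 51391683.0, 3ec20f2e.0 above); that is the subject-name hash OpenSSL uses to locate CA certificates at verification time, the same thing c_rehash / openssl rehash automates. The step for a single PEM, as a minimal sketch:
	    pem=/usr/share/ca-certificates/minikubeCA.pem
	    h=$(openssl x509 -hash -noout -in "$pem")        # prints e.g. b5213941
	    sudo ln -fs "$pem" "/etc/ssl/certs/$h.0"         # hash-named symlink OpenSSL searches for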
	I1009 19:27:39.430407  343307 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1009 19:27:39.437508  343307 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1009 19:27:39.502060  343307 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1009 19:27:39.549190  343307 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1009 19:27:39.599910  343307 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1009 19:27:39.657699  343307 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1009 19:27:39.729015  343307 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
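	The -checkend 86400 runs above exit non-zero if a certificate expires within the next 86400 seconds (24 hours); that is how minikube decides whether the existing control-plane certificates can be reused across the restart. The same check over those files, as a sketch assuming the standard /var/lib/minikube/certs layout:
	    for c in apiserver-etcd-client apiserver-kubelet-client etcd/server etcd/healthcheck-client etcd/peer front-proxy-client; do
	      sudo openssl x509 -noout -in "/var/lib/minikube/certs/$c.crt" -checkend 86400 \
	        && echo "$c: valid for >24h" || echo "$c: expiring within 24h"
	    done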
	I1009 19:27:39.791014  343307 kubeadm.go:400] StartCluster: {Name:ha-807463 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-807463 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1009 19:27:39.791208  343307 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1009 19:27:39.791318  343307 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1009 19:27:39.827907  343307 cri.go:89] found id: "9d475a483e7023b214d8a1506f2ba793d2cb34e4e0e7b5f0fc49d91b875116f7"
	I1009 19:27:39.827980  343307 cri.go:89] found id: "eb3eb3edb2fff30f90b98210a15c7960a0d8f4700c380a4bc2a236e3530d4043"
	I1009 19:27:39.828002  343307 cri.go:89] found id: "e4593fb70e6dd0047bc83f89897d4c1ad23896e5ca9a3628c4bbeea360f8cbaf"
	I1009 19:27:39.828027  343307 cri.go:89] found id: "60abd5bf9ea13b7e15b4cb133643cb620ae0f536d45d6ac30703be2e3ef7a45f"
	I1009 19:27:39.828064  343307 cri.go:89] found id: "4477522bd8536fe09afcc2397cd8beb927ccd19a6714098fb7bb1f3ef47595ea"
	I1009 19:27:39.828090  343307 cri.go:89] found id: ""
	I1009 19:27:39.828175  343307 ssh_runner.go:195] Run: sudo runc list -f json
	W1009 19:27:39.846495  343307 kubeadm.go:407] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-09T19:27:39Z" level=error msg="open /run/runc: no such file or directory"
	I1009 19:27:39.846575  343307 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1009 19:27:39.873447  343307 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1009 19:27:39.873525  343307 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1009 19:27:39.873618  343307 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1009 19:27:39.890893  343307 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1009 19:27:39.891370  343307 kubeconfig.go:47] verify endpoint returned: get endpoint: "ha-807463" does not appear in /home/jenkins/minikube-integration/21683-294150/kubeconfig
	I1009 19:27:39.891541  343307 kubeconfig.go:62] /home/jenkins/minikube-integration/21683-294150/kubeconfig needs updating (will repair): [kubeconfig missing "ha-807463" cluster setting kubeconfig missing "ha-807463" context setting]
	I1009 19:27:39.891898  343307 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-294150/kubeconfig: {Name:mke4661344aa77e10bf6690825e8d3a29b29b411 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 19:27:39.892555  343307 kapi.go:59] client config for ha-807463: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21683-294150/.minikube/profiles/ha-807463/client.crt", KeyFile:"/home/jenkins/minikube-integration/21683-294150/.minikube/profiles/ha-807463/client.key", CAFile:"/home/jenkins/minikube-integration/21683-294150/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2120180), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1009 19:27:39.893429  343307 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1009 19:27:39.893485  343307 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1009 19:27:39.893506  343307 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1009 19:27:39.893530  343307 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1009 19:27:39.893571  343307 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1009 19:27:39.894036  343307 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1009 19:27:39.894259  343307 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1009 19:27:39.909848  343307 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.49.2
	I1009 19:27:39.909926  343307 kubeadm.go:601] duration metric: took 36.380579ms to restartPrimaryControlPlane
	I1009 19:27:39.909962  343307 kubeadm.go:402] duration metric: took 118.974675ms to StartCluster
	I1009 19:27:39.909997  343307 settings.go:142] acquiring lock: {Name:mk20228ebaa2294ae35726600a0d8058088b24a4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 19:27:39.910102  343307 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21683-294150/kubeconfig
	I1009 19:27:39.910819  343307 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-294150/kubeconfig: {Name:mke4661344aa77e10bf6690825e8d3a29b29b411 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 19:27:39.911409  343307 config.go:182] Loaded profile config "ha-807463": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1009 19:27:39.911493  343307 start.go:234] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1009 19:27:39.911613  343307 start.go:242] waiting for startup goroutines ...
	I1009 19:27:39.911544  343307 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1009 19:27:39.917562  343307 out.go:179] * Enabled addons: 
	I1009 19:27:39.920371  343307 addons.go:514] duration metric: took 8.815745ms for enable addons: enabled=[]
	I1009 19:27:39.920465  343307 start.go:247] waiting for cluster config update ...
	I1009 19:27:39.920489  343307 start.go:256] writing updated cluster config ...
	I1009 19:27:39.924923  343307 out.go:203] 
	I1009 19:27:39.928045  343307 config.go:182] Loaded profile config "ha-807463": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1009 19:27:39.928167  343307 profile.go:143] Saving config to /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/ha-807463/config.json ...
	I1009 19:27:39.931505  343307 out.go:179] * Starting "ha-807463-m02" control-plane node in "ha-807463" cluster
	I1009 19:27:39.934402  343307 cache.go:123] Beginning downloading kic base image for docker with crio
	I1009 19:27:39.937316  343307 out.go:179] * Pulling base image v0.0.48-1759745255-21703 ...
	I1009 19:27:39.940080  343307 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1009 19:27:39.940107  343307 cache.go:58] Caching tarball of preloaded images
	I1009 19:27:39.940210  343307 preload.go:233] Found /home/jenkins/minikube-integration/21683-294150/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1009 19:27:39.940220  343307 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1009 19:27:39.940348  343307 profile.go:143] Saving config to /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/ha-807463/config.json ...
	I1009 19:27:39.940566  343307 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon
	I1009 19:27:39.975622  343307 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon, skipping pull
	I1009 19:27:39.975643  343307 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 exists in daemon, skipping load
	I1009 19:27:39.975657  343307 cache.go:232] Successfully downloaded all kic artifacts
	I1009 19:27:39.975682  343307 start.go:361] acquireMachinesLock for ha-807463-m02: {Name:mk6ba8ff733306501b688f1b4a216ac9e405e90f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1009 19:27:39.975736  343307 start.go:365] duration metric: took 39.187µs to acquireMachinesLock for "ha-807463-m02"
	I1009 19:27:39.975756  343307 start.go:97] Skipping create...Using existing machine configuration
	I1009 19:27:39.975761  343307 fix.go:55] fixHost starting: m02
	I1009 19:27:39.976050  343307 cli_runner.go:164] Run: docker container inspect ha-807463-m02 --format={{.State.Status}}
	I1009 19:27:40.012164  343307 fix.go:113] recreateIfNeeded on ha-807463-m02: state=Stopped err=<nil>
	W1009 19:27:40.012195  343307 fix.go:139] unexpected machine state, will restart: <nil>
	I1009 19:27:40.015441  343307 out.go:252] * Restarting existing docker container for "ha-807463-m02" ...
	I1009 19:27:40.015539  343307 cli_runner.go:164] Run: docker start ha-807463-m02
	I1009 19:27:40.410002  343307 cli_runner.go:164] Run: docker container inspect ha-807463-m02 --format={{.State.Status}}
	I1009 19:27:40.445455  343307 kic.go:430] container "ha-807463-m02" state is running.
	I1009 19:27:40.445851  343307 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-807463-m02
	I1009 19:27:40.474228  343307 profile.go:143] Saving config to /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/ha-807463/config.json ...
	I1009 19:27:40.474476  343307 machine.go:93] provisionDockerMachine start ...
	I1009 19:27:40.474538  343307 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-807463-m02
	I1009 19:27:40.505891  343307 main.go:141] libmachine: Using SSH client type: native
	I1009 19:27:40.506192  343307 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33186 <nil> <nil>}
	I1009 19:27:40.506201  343307 main.go:141] libmachine: About to run SSH command:
	hostname
	I1009 19:27:40.506929  343307 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:41996->127.0.0.1:33186: read: connection reset by peer
	I1009 19:27:43.729947  343307 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-807463-m02
	
	I1009 19:27:43.729974  343307 ubuntu.go:182] provisioning hostname "ha-807463-m02"
	I1009 19:27:43.730046  343307 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-807463-m02
	I1009 19:27:43.750597  343307 main.go:141] libmachine: Using SSH client type: native
	I1009 19:27:43.750914  343307 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33186 <nil> <nil>}
	I1009 19:27:43.750934  343307 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-807463-m02 && echo "ha-807463-m02" | sudo tee /etc/hostname
	I1009 19:27:44.042915  343307 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-807463-m02
	
	I1009 19:27:44.043000  343307 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-807463-m02
	I1009 19:27:44.070967  343307 main.go:141] libmachine: Using SSH client type: native
	I1009 19:27:44.071275  343307 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33186 <nil> <nil>}
	I1009 19:27:44.071306  343307 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-807463-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-807463-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-807463-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1009 19:27:44.341979  343307 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1009 19:27:44.342008  343307 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21683-294150/.minikube CaCertPath:/home/jenkins/minikube-integration/21683-294150/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21683-294150/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21683-294150/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21683-294150/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21683-294150/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21683-294150/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21683-294150/.minikube}
	I1009 19:27:44.342024  343307 ubuntu.go:190] setting up certificates
	I1009 19:27:44.342039  343307 provision.go:84] configureAuth start
	I1009 19:27:44.342104  343307 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-807463-m02
	I1009 19:27:44.370782  343307 provision.go:143] copyHostCerts
	I1009 19:27:44.370832  343307 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-294150/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21683-294150/.minikube/key.pem
	I1009 19:27:44.370866  343307 exec_runner.go:144] found /home/jenkins/minikube-integration/21683-294150/.minikube/key.pem, removing ...
	I1009 19:27:44.370878  343307 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21683-294150/.minikube/key.pem
	I1009 19:27:44.370961  343307 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-294150/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21683-294150/.minikube/key.pem (1679 bytes)
	I1009 19:27:44.371063  343307 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-294150/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21683-294150/.minikube/ca.pem
	I1009 19:27:44.371087  343307 exec_runner.go:144] found /home/jenkins/minikube-integration/21683-294150/.minikube/ca.pem, removing ...
	I1009 19:27:44.371095  343307 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21683-294150/.minikube/ca.pem
	I1009 19:27:44.371128  343307 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-294150/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21683-294150/.minikube/ca.pem (1078 bytes)
	I1009 19:27:44.371178  343307 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-294150/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21683-294150/.minikube/cert.pem
	I1009 19:27:44.371200  343307 exec_runner.go:144] found /home/jenkins/minikube-integration/21683-294150/.minikube/cert.pem, removing ...
	I1009 19:27:44.371210  343307 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21683-294150/.minikube/cert.pem
	I1009 19:27:44.371237  343307 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-294150/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21683-294150/.minikube/cert.pem (1123 bytes)
	I1009 19:27:44.371335  343307 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21683-294150/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21683-294150/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21683-294150/.minikube/certs/ca-key.pem org=jenkins.ha-807463-m02 san=[127.0.0.1 192.168.49.3 ha-807463-m02 localhost minikube]
	I1009 19:27:45.671497  343307 provision.go:177] copyRemoteCerts
	I1009 19:27:45.671655  343307 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1009 19:27:45.671727  343307 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-807463-m02
	I1009 19:27:45.689990  343307 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33186 SSHKeyPath:/home/jenkins/minikube-integration/21683-294150/.minikube/machines/ha-807463-m02/id_rsa Username:docker}
	I1009 19:27:45.879571  343307 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-294150/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1009 19:27:45.879633  343307 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-294150/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1009 19:27:45.934252  343307 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-294150/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1009 19:27:45.934317  343307 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-294150/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1009 19:27:46.015412  343307 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-294150/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1009 19:27:46.015492  343307 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-294150/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1009 19:27:46.095867  343307 provision.go:87] duration metric: took 1.753810196s to configureAuth
	I1009 19:27:46.095898  343307 ubuntu.go:206] setting minikube options for container-runtime
	I1009 19:27:46.096158  343307 config.go:182] Loaded profile config "ha-807463": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1009 19:27:46.096279  343307 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-807463-m02
	I1009 19:27:46.134871  343307 main.go:141] libmachine: Using SSH client type: native
	I1009 19:27:46.135193  343307 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33186 <nil> <nil>}
	I1009 19:27:46.135215  343307 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1009 19:27:47.743001  343307 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1009 19:27:47.743025  343307 machine.go:96] duration metric: took 7.268539709s to provisionDockerMachine
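	The provisioning step above writes /etc/sysconfig/crio.minikube with --insecure-registry 10.96.0.0/12 (the service CIDR) and restarts CRI-O; the flag is presumably wired into the crio unit through an environment file referenced by the kicbase image's systemd configuration. Two quick, hedged checks on the node:
	    cat /etc/sysconfig/crio.minikube                 # should contain the insecure-registry option
	    systemctl cat crio | grep -i environment         # shows whether/how the env file is sourced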
	I1009 19:27:47.743037  343307 start.go:294] postStartSetup for "ha-807463-m02" (driver="docker")
	I1009 19:27:47.743048  343307 start.go:323] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1009 19:27:47.743114  343307 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1009 19:27:47.743178  343307 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-807463-m02
	I1009 19:27:47.763602  343307 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33186 SSHKeyPath:/home/jenkins/minikube-integration/21683-294150/.minikube/machines/ha-807463-m02/id_rsa Username:docker}
	I1009 19:27:47.878489  343307 ssh_runner.go:195] Run: cat /etc/os-release
	I1009 19:27:47.882311  343307 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1009 19:27:47.882390  343307 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1009 19:27:47.882425  343307 filesync.go:126] Scanning /home/jenkins/minikube-integration/21683-294150/.minikube/addons for local assets ...
	I1009 19:27:47.882513  343307 filesync.go:126] Scanning /home/jenkins/minikube-integration/21683-294150/.minikube/files for local assets ...
	I1009 19:27:47.882649  343307 filesync.go:149] local asset: /home/jenkins/minikube-integration/21683-294150/.minikube/files/etc/ssl/certs/2960022.pem -> 2960022.pem in /etc/ssl/certs
	I1009 19:27:47.882678  343307 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-294150/.minikube/files/etc/ssl/certs/2960022.pem -> /etc/ssl/certs/2960022.pem
	I1009 19:27:47.882829  343307 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1009 19:27:47.895445  343307 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-294150/.minikube/files/etc/ssl/certs/2960022.pem --> /etc/ssl/certs/2960022.pem (1708 bytes)
	I1009 19:27:47.923753  343307 start.go:297] duration metric: took 180.689414ms for postStartSetup
	I1009 19:27:47.923906  343307 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1009 19:27:47.923987  343307 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-807463-m02
	I1009 19:27:47.943574  343307 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33186 SSHKeyPath:/home/jenkins/minikube-integration/21683-294150/.minikube/machines/ha-807463-m02/id_rsa Username:docker}
	I1009 19:27:48.072414  343307 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1009 19:27:48.090538  343307 fix.go:57] duration metric: took 8.114767256s for fixHost
	I1009 19:27:48.090623  343307 start.go:84] releasing machines lock for "ha-807463-m02", held for 8.114877188s
	I1009 19:27:48.090728  343307 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-807463-m02
	I1009 19:27:48.124084  343307 out.go:179] * Found network options:
	I1009 19:27:48.127431  343307 out.go:179]   - NO_PROXY=192.168.49.2
	W1009 19:27:48.131026  343307 proxy.go:120] fail to check proxy env: Error ip not in block
	W1009 19:27:48.131071  343307 proxy.go:120] fail to check proxy env: Error ip not in block
	I1009 19:27:48.131145  343307 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1009 19:27:48.131185  343307 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-807463-m02
	I1009 19:27:48.131442  343307 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1009 19:27:48.131511  343307 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-807463-m02
	I1009 19:27:48.169238  343307 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33186 SSHKeyPath:/home/jenkins/minikube-integration/21683-294150/.minikube/machines/ha-807463-m02/id_rsa Username:docker}
	I1009 19:27:48.169825  343307 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33186 SSHKeyPath:/home/jenkins/minikube-integration/21683-294150/.minikube/machines/ha-807463-m02/id_rsa Username:docker}
	I1009 19:27:48.682814  343307 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1009 19:27:48.688162  343307 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1009 19:27:48.688239  343307 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1009 19:27:48.699171  343307 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1009 19:27:48.699193  343307 start.go:496] detecting cgroup driver to use...
	I1009 19:27:48.699225  343307 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1009 19:27:48.699282  343307 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1009 19:27:48.728026  343307 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1009 19:27:48.752647  343307 docker.go:218] disabling cri-docker service (if available) ...
	I1009 19:27:48.752765  343307 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1009 19:27:48.774861  343307 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1009 19:27:48.799117  343307 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1009 19:27:49.042961  343307 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1009 19:27:49.283614  343307 docker.go:234] disabling docker service ...
	I1009 19:27:49.283734  343307 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1009 19:27:49.307987  343307 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1009 19:27:49.328204  343307 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1009 19:27:49.580623  343307 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1009 19:27:49.895453  343307 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1009 19:27:49.919339  343307 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1009 19:27:49.947539  343307 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1009 19:27:49.947656  343307 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:27:49.962511  343307 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1009 19:27:49.962650  343307 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:27:49.979924  343307 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:27:49.995805  343307 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:27:50.007931  343307 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1009 19:27:50.028218  343307 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:27:50.068031  343307 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:27:50.096196  343307 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:27:50.122544  343307 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1009 19:27:50.151110  343307 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1009 19:27:50.173303  343307 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 19:27:50.489690  343307 ssh_runner.go:195] Run: sudo systemctl restart crio
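	The sed edits above rewrite /etc/crio/crio.conf.d/02-crio.conf in place: pause_image is pinned to registry.k8s.io/pause:3.10.1, cgroup_manager is set to cgroupfs with conmon_cgroup = "pod", and net.ipv4.ip_unprivileged_port_start=0 is appended to default_sysctls before CRI-O is restarted. A verification sketch against the resulting drop-in:
	    sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
	      /etc/crio/crio.conf.d/02-crio.conf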
	I1009 19:27:50.773593  343307 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1009 19:27:50.773686  343307 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1009 19:27:50.777653  343307 start.go:564] Will wait 60s for crictl version
	I1009 19:27:50.777737  343307 ssh_runner.go:195] Run: which crictl
	I1009 19:27:50.781240  343307 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1009 19:27:50.810791  343307 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1009 19:27:50.810938  343307 ssh_runner.go:195] Run: crio --version
	I1009 19:27:50.840800  343307 ssh_runner.go:195] Run: crio --version
	I1009 19:27:50.876670  343307 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1009 19:27:50.879670  343307 out.go:179]   - env NO_PROXY=192.168.49.2
	I1009 19:27:50.882673  343307 cli_runner.go:164] Run: docker network inspect ha-807463 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1009 19:27:50.898864  343307 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1009 19:27:50.902801  343307 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1009 19:27:50.912892  343307 mustload.go:65] Loading cluster: ha-807463
	I1009 19:27:50.913185  343307 config.go:182] Loaded profile config "ha-807463": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1009 19:27:50.913459  343307 cli_runner.go:164] Run: docker container inspect ha-807463 --format={{.State.Status}}
	I1009 19:27:50.931384  343307 host.go:66] Checking if "ha-807463" exists ...
	I1009 19:27:50.931675  343307 certs.go:69] Setting up /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/ha-807463 for IP: 192.168.49.3
	I1009 19:27:50.931689  343307 certs.go:195] generating shared ca certs ...
	I1009 19:27:50.931705  343307 certs.go:227] acquiring lock for ca certs: {Name:mk21fccf7de12baa51e226eec44546b42b579a70 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 19:27:50.931837  343307 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21683-294150/.minikube/ca.key
	I1009 19:27:50.931898  343307 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21683-294150/.minikube/proxy-client-ca.key
	I1009 19:27:50.931911  343307 certs.go:257] generating profile certs ...
	I1009 19:27:50.931992  343307 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/ha-807463/client.key
	I1009 19:27:50.932059  343307 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/ha-807463/apiserver.key.0cec3fb8
	I1009 19:27:50.932139  343307 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/ha-807463/proxy-client.key
	I1009 19:27:50.932153  343307 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-294150/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1009 19:27:50.932166  343307 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-294150/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1009 19:27:50.932181  343307 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-294150/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1009 19:27:50.932192  343307 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-294150/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1009 19:27:50.932209  343307 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/ha-807463/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1009 19:27:50.932226  343307 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/ha-807463/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1009 19:27:50.932242  343307 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/ha-807463/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1009 19:27:50.932253  343307 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/ha-807463/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1009 19:27:50.932306  343307 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-294150/.minikube/certs/296002.pem (1338 bytes)
	W1009 19:27:50.932342  343307 certs.go:480] ignoring /home/jenkins/minikube-integration/21683-294150/.minikube/certs/296002_empty.pem, impossibly tiny 0 bytes
	I1009 19:27:50.932355  343307 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-294150/.minikube/certs/ca-key.pem (1679 bytes)
	I1009 19:27:50.932378  343307 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-294150/.minikube/certs/ca.pem (1078 bytes)
	I1009 19:27:50.932408  343307 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-294150/.minikube/certs/cert.pem (1123 bytes)
	I1009 19:27:50.932435  343307 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-294150/.minikube/certs/key.pem (1679 bytes)
	I1009 19:27:50.932481  343307 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-294150/.minikube/files/etc/ssl/certs/2960022.pem (1708 bytes)
	I1009 19:27:50.932513  343307 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-294150/.minikube/certs/296002.pem -> /usr/share/ca-certificates/296002.pem
	I1009 19:27:50.932528  343307 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-294150/.minikube/files/etc/ssl/certs/2960022.pem -> /usr/share/ca-certificates/2960022.pem
	I1009 19:27:50.932539  343307 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-294150/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1009 19:27:50.932602  343307 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-807463
	I1009 19:27:50.949747  343307 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33181 SSHKeyPath:/home/jenkins/minikube-integration/21683-294150/.minikube/machines/ha-807463/id_rsa Username:docker}
	I1009 19:27:51.053408  343307 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I1009 19:27:51.057364  343307 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I1009 19:27:51.066242  343307 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I1009 19:27:51.070160  343307 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I1009 19:27:51.082531  343307 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I1009 19:27:51.086523  343307 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I1009 19:27:51.095670  343307 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I1009 19:27:51.099538  343307 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I1009 19:27:51.108444  343307 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I1009 19:27:51.112383  343307 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I1009 19:27:51.121230  343307 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I1009 19:27:51.126634  343307 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I1009 19:27:51.135934  343307 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-294150/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1009 19:27:51.157827  343307 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-294150/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1009 19:27:51.177909  343307 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-294150/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1009 19:27:51.208380  343307 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-294150/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1009 19:27:51.233729  343307 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/ha-807463/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1009 19:27:51.254881  343307 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/ha-807463/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1009 19:27:51.273448  343307 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/ha-807463/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1009 19:27:51.293146  343307 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/ha-807463/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1009 19:27:51.312924  343307 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-294150/.minikube/certs/296002.pem --> /usr/share/ca-certificates/296002.pem (1338 bytes)
	I1009 19:27:51.335482  343307 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-294150/.minikube/files/etc/ssl/certs/2960022.pem --> /usr/share/ca-certificates/2960022.pem (1708 bytes)
	I1009 19:27:51.355302  343307 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-294150/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1009 19:27:51.375754  343307 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I1009 19:27:51.391115  343307 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I1009 19:27:51.404527  343307 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I1009 19:27:51.418174  343307 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I1009 19:27:51.431794  343307 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I1009 19:27:51.445219  343307 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I1009 19:27:51.460138  343307 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I1009 19:27:51.473336  343307 ssh_runner.go:195] Run: openssl version
	I1009 19:27:51.480063  343307 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/296002.pem && ln -fs /usr/share/ca-certificates/296002.pem /etc/ssl/certs/296002.pem"
	I1009 19:27:51.488916  343307 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/296002.pem
	I1009 19:27:51.493541  343307 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  9 19:08 /usr/share/ca-certificates/296002.pem
	I1009 19:27:51.493662  343307 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/296002.pem
	I1009 19:27:51.535043  343307 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/296002.pem /etc/ssl/certs/51391683.0"
	I1009 19:27:51.543247  343307 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2960022.pem && ln -fs /usr/share/ca-certificates/2960022.pem /etc/ssl/certs/2960022.pem"
	I1009 19:27:51.552252  343307 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2960022.pem
	I1009 19:27:51.556439  343307 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  9 19:08 /usr/share/ca-certificates/2960022.pem
	I1009 19:27:51.556553  343307 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2960022.pem
	I1009 19:27:51.598587  343307 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2960022.pem /etc/ssl/certs/3ec20f2e.0"
	I1009 19:27:51.607271  343307 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1009 19:27:51.616125  343307 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1009 19:27:51.620083  343307 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  9 19:01 /usr/share/ca-certificates/minikubeCA.pem
	I1009 19:27:51.620175  343307 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1009 19:27:51.664070  343307 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
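For reference, the cert-install steps above (copy the PEM into /usr/share/ca-certificates, compute its OpenSSL subject hash, and symlink it into /etc/ssl/certs/<hash>.0, e.g. b5213941.0 for minikubeCA.pem) can be sketched in Go roughly as follows. This is an illustration of the technique, not minikube's implementation; the paths are the ones shown in the log.

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkCA installs a CA certificate into the node's trust store the same way the
// log above does: shell out to `openssl x509 -hash -noout` for the subject hash,
// then create /etc/ssl/certs/<hash>.0 as a symlink to the PEM file.
func linkCA(certPath, certsDir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return fmt.Errorf("hashing %s: %w", certPath, err)
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join(certsDir, hash+".0")
	// Equivalent to: test -L <link> || ln -fs <cert> <link>
	if _, err := os.Lstat(link); err == nil {
		return nil // already linked
	}
	return os.Symlink(certPath, link)
}

func main() {
	if err := linkCA("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}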
	I1009 19:27:51.672785  343307 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1009 19:27:51.676884  343307 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1009 19:27:51.718930  343307 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1009 19:27:51.761150  343307 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1009 19:27:51.802284  343307 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1009 19:27:51.843422  343307 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1009 19:27:51.890388  343307 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
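The `-checkend 86400` probes above ask whether each control-plane certificate expires within the next 24 hours. A minimal Go equivalent, assuming one of the same certificate paths (illustrative sketch only, not minikube's code):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the PEM certificate at path expires within d,
// mirroring `openssl x509 -checkend <seconds>`.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("%s: no PEM block found", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("expires within 24h:", soon)
}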
	I1009 19:27:51.931465  343307 kubeadm.go:934] updating node {m02 192.168.49.3 8443 v1.34.1 crio true true} ...
	I1009 19:27:51.931643  343307 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-807463-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.3
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:ha-807463 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1009 19:27:51.931677  343307 kube-vip.go:115] generating kube-vip config ...
	I1009 19:27:51.931730  343307 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I1009 19:27:51.945085  343307 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I1009 19:27:51.945174  343307 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
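The `lsmod | grep ip_vs` probe at the start of this step decides whether kube-vip may enable IPVS control-plane load-balancing; since no ip_vs modules were loaded, the generated manifest above relies on ARP-based leader election for the VIP (vip_arp/vip_leaderelection) only. A minimal sketch of that probe, reading /proc/modules directly instead of piping lsmod through grep (not minikube's code):

package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

// ipvsLoaded reports whether any ip_vs* kernel module is currently loaded.
// /proc/modules is the same data lsmod prints; the module name is the first field.
func ipvsLoaded() (bool, error) {
	f, err := os.Open("/proc/modules")
	if err != nil {
		return false, err
	}
	defer f.Close()
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		if strings.HasPrefix(sc.Text(), "ip_vs") {
			return true, nil
		}
	}
	return false, sc.Err()
}

func main() {
	ok, err := ipvsLoaded()
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("ip_vs modules loaded:", ok)
}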
	I1009 19:27:51.945236  343307 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1009 19:27:51.955208  343307 binaries.go:44] Found k8s binaries, skipping transfer
	I1009 19:27:51.955321  343307 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I1009 19:27:51.963468  343307 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1009 19:27:51.977048  343307 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1009 19:27:51.990708  343307 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I1009 19:27:52.008521  343307 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1009 19:27:52.012741  343307 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1009 19:27:52.024091  343307 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 19:27:52.162593  343307 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1009 19:27:52.176738  343307 start.go:236] Will wait 6m0s for node &{Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1009 19:27:52.177297  343307 config.go:182] Loaded profile config "ha-807463": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1009 19:27:52.180462  343307 out.go:179] * Verifying Kubernetes components...
	I1009 19:27:52.183354  343307 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 19:27:52.328633  343307 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1009 19:27:52.343053  343307 kapi.go:59] client config for ha-807463: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21683-294150/.minikube/profiles/ha-807463/client.crt", KeyFile:"/home/jenkins/minikube-integration/21683-294150/.minikube/profiles/ha-807463/client.key", CAFile:"/home/jenkins/minikube-integration/21683-294150/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2120180), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1009 19:27:52.343132  343307 kubeadm.go:491] Overriding stale ClientConfig host https://192.168.49.254:8443 with https://192.168.49.2:8443
	I1009 19:27:52.343378  343307 node_ready.go:35] waiting up to 6m0s for node "ha-807463-m02" to be "Ready" ...
	I1009 19:28:12.417047  343307 node_ready.go:49] node "ha-807463-m02" is "Ready"
	I1009 19:28:12.417075  343307 node_ready.go:38] duration metric: took 20.07367073s for node "ha-807463-m02" to be "Ready" ...
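The Ready wait above amounts to polling the node's status conditions through the API server. A hedged client-go sketch; the kubeconfig path and node name are assumptions taken from the log, and this is not minikube's node_ready.go:

package main

import (
	"context"
	"fmt"
	"os"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Poll every 2s, up to 6 minutes, until the node's Ready condition is True.
	err = wait.PollUntilContextTimeout(context.Background(), 2*time.Second, 6*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			node, err := cs.CoreV1().Nodes().Get(ctx, "ha-807463-m02", metav1.GetOptions{})
			if err != nil {
				return false, nil // treat API errors as transient and keep polling
			}
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady {
					return c.Status == corev1.ConditionTrue, nil
				}
			}
			return false, nil
		})
	if err != nil {
		panic(err)
	}
	fmt.Println("node is Ready")
}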
	I1009 19:28:12.417087  343307 api_server.go:52] waiting for apiserver process to appear ...
	I1009 19:28:12.417171  343307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 19:28:12.917913  343307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 19:28:13.418163  343307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 19:28:13.917283  343307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 19:28:14.417776  343307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 19:28:14.441559  343307 api_server.go:72] duration metric: took 22.264725667s to wait for apiserver process to appear ...
	I1009 19:28:14.441582  343307 api_server.go:88] waiting for apiserver healthz status ...
	I1009 19:28:14.441601  343307 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1009 19:28:14.457402  343307 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1009 19:28:14.458648  343307 api_server.go:141] control plane version: v1.34.1
	I1009 19:28:14.458703  343307 api_server.go:131] duration metric: took 17.113274ms to wait for apiserver health ...
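The healthz wait above is a plain HTTPS GET against the apiserver that trusts the cluster CA; with default RBAC the /healthz endpoint is readable without client credentials (system:public-info-viewer), which is assumed here. A rough Go sketch with the CA path and endpoint taken from the log:

package main

import (
	"crypto/tls"
	"crypto/x509"
	"fmt"
	"io"
	"net/http"
	"os"
	"time"
)

func main() {
	caPEM, err := os.ReadFile("/home/jenkins/minikube-integration/21683-294150/.minikube/ca.crt")
	if err != nil {
		panic(err)
	}
	pool := x509.NewCertPool()
	pool.AppendCertsFromPEM(caPEM)

	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{RootCAs: pool}},
	}
	resp, err := client.Get("https://192.168.49.2:8443/healthz")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	// A healthy apiserver answers 200 with the body "ok", as in the log above.
	fmt.Printf("healthz: %d %s\n", resp.StatusCode, body)
}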
	I1009 19:28:14.458728  343307 system_pods.go:43] waiting for kube-system pods to appear ...
	I1009 19:28:14.470395  343307 system_pods.go:59] 26 kube-system pods found
	I1009 19:28:14.470439  343307 system_pods.go:61] "coredns-66bc5c9577-tswbs" [5837c6fe-278a-4b3a-98d1-79992fe9ea08] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1009 19:28:14.470449  343307 system_pods.go:61] "coredns-66bc5c9577-vkzgf" [80c50dd0-6a2c-4662-80d3-72f45754c3df] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1009 19:28:14.470454  343307 system_pods.go:61] "etcd-ha-807463" [84964141-cf31-4652-9a3c-a9265edf4f8d] Running
	I1009 19:28:14.470459  343307 system_pods.go:61] "etcd-ha-807463-m02" [e91cfd04-5988-45ce-9dae-b204db6efe4e] Running
	I1009 19:28:14.470464  343307 system_pods.go:61] "etcd-ha-807463-m03" [26cd4bca-fd69-452f-b5a2-b9bbc5966ded] Running
	I1009 19:28:14.470473  343307 system_pods.go:61] "kindnet-bc8tf" [f003f127-5e25-434a-837b-d021fb0e3fa7] Running
	I1009 19:28:14.470477  343307 system_pods.go:61] "kindnet-dvwc7" [2a7512ff-e63c-4aa0-8b4e-fb241415067f] Running
	I1009 19:28:14.470483  343307 system_pods.go:61] "kindnet-gvpmq" [223d0c34-5384-4cd5-a0d2-842a422629ab] Running
	I1009 19:28:14.470488  343307 system_pods.go:61] "kindnet-rc46j" [22f58fe4-1d11-4259-b9f9-e8740b8b2257] Running
	I1009 19:28:14.470501  343307 system_pods.go:61] "kube-apiserver-ha-807463" [f6f353e4-8237-46db-a4a8-cd536448a79b] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1009 19:28:14.470507  343307 system_pods.go:61] "kube-apiserver-ha-807463-m02" [3d8c0d4b-2cfb-4de6-8d9f-95e25e6f2a4e] Running
	I1009 19:28:14.470517  343307 system_pods.go:61] "kube-apiserver-ha-807463-m03" [a7b828f8-ab95-440a-b42e-e48d83bf3d20] Running
	I1009 19:28:14.470521  343307 system_pods.go:61] "kube-controller-manager-ha-807463" [e409b5f4-e73e-4270-bc1b-44b9a84123c7] Running
	I1009 19:28:14.470527  343307 system_pods.go:61] "kube-controller-manager-ha-807463-m02" [bce8c53d-0ba9-4e5f-93ca-06958824d9ba] Running
	I1009 19:28:14.470538  343307 system_pods.go:61] "kube-controller-manager-ha-807463-m03" [96d81c2f-668e-4729-aa2c-ab008af31ef1] Running
	I1009 19:28:14.470542  343307 system_pods.go:61] "kube-proxy-2lp2p" [cb605c64-8004-4f40-8e70-eb8e3184d3d6] Running
	I1009 19:28:14.470546  343307 system_pods.go:61] "kube-proxy-7lpbk" [d6ba71bf-d06d-4ade-b0e4-85303842110c] Running
	I1009 19:28:14.470550  343307 system_pods.go:61] "kube-proxy-b84dn" [9c10ee5e-8408-4b6f-985a-8d4f44a869cc] Running
	I1009 19:28:14.470555  343307 system_pods.go:61] "kube-proxy-vw7c5" [89df419c-841c-4a9c-af83-50e98327318d] Running
	I1009 19:28:14.470561  343307 system_pods.go:61] "kube-scheduler-ha-807463" [d577e200-00d6-4bac-aa67-0f7ef54c4d1a] Running
	I1009 19:28:14.470568  343307 system_pods.go:61] "kube-scheduler-ha-807463-m02" [848b94f3-79dc-44dc-8416-33c96451e0c0] Running
	I1009 19:28:14.470572  343307 system_pods.go:61] "kube-scheduler-ha-807463-m03" [f7153dac-0ede-40dc-b18c-1c03bebc8414] Running
	I1009 19:28:14.470578  343307 system_pods.go:61] "kube-vip-ha-807463" [f4f09ea9-0059-4cc4-9c0b-0ea2240a1885] Running
	I1009 19:28:14.470583  343307 system_pods.go:61] "kube-vip-ha-807463-m02" [98f28358-d9e9-4f8a-b407-b14baa34ea75] Running
	I1009 19:28:14.470589  343307 system_pods.go:61] "kube-vip-ha-807463-m03" [c150d4cd-1c28-4677-9a55-6e2d119daa81] Running
	I1009 19:28:14.470594  343307 system_pods.go:61] "storage-provisioner" [b9e8a81e-2bee-4542-b231-7490dfbf6065] Running
	I1009 19:28:14.470599  343307 system_pods.go:74] duration metric: took 11.85336ms to wait for pod list to return data ...
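The pod inventory above, including the "Running / Ready:ContainersNotReady" annotations, can be reproduced with a short client-go listing of kube-system pods. This sketch checks the PodReady condition rather than minikube's exact formatting in system_pods.go, and assumes KUBECONFIG points at the cluster:

package main

import (
	"context"
	"fmt"
	"os"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	pods, err := cs.CoreV1().Pods("kube-system").List(context.Background(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, p := range pods.Items {
		ready := true
		for _, c := range p.Status.Conditions {
			if c.Type == corev1.PodReady && c.Status != corev1.ConditionTrue {
				ready = false
			}
		}
		// Phase "Running" with ready=false corresponds to the
		// "Running / Ready:ContainersNotReady" entries in the log.
		fmt.Printf("%-45s phase=%s ready=%t\n", p.Name, p.Status.Phase, ready)
	}
}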
	I1009 19:28:14.470612  343307 default_sa.go:34] waiting for default service account to be created ...
	I1009 19:28:14.482492  343307 default_sa.go:45] found service account: "default"
	I1009 19:28:14.482522  343307 default_sa.go:55] duration metric: took 11.902296ms for default service account to be created ...
	I1009 19:28:14.482532  343307 system_pods.go:116] waiting for k8s-apps to be running ...
	I1009 19:28:14.496415  343307 system_pods.go:86] 26 kube-system pods found
	I1009 19:28:14.496458  343307 system_pods.go:89] "coredns-66bc5c9577-tswbs" [5837c6fe-278a-4b3a-98d1-79992fe9ea08] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1009 19:28:14.496468  343307 system_pods.go:89] "coredns-66bc5c9577-vkzgf" [80c50dd0-6a2c-4662-80d3-72f45754c3df] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1009 19:28:14.496475  343307 system_pods.go:89] "etcd-ha-807463" [84964141-cf31-4652-9a3c-a9265edf4f8d] Running
	I1009 19:28:14.496480  343307 system_pods.go:89] "etcd-ha-807463-m02" [e91cfd04-5988-45ce-9dae-b204db6efe4e] Running
	I1009 19:28:14.496484  343307 system_pods.go:89] "etcd-ha-807463-m03" [26cd4bca-fd69-452f-b5a2-b9bbc5966ded] Running
	I1009 19:28:14.496488  343307 system_pods.go:89] "kindnet-bc8tf" [f003f127-5e25-434a-837b-d021fb0e3fa7] Running
	I1009 19:28:14.496493  343307 system_pods.go:89] "kindnet-dvwc7" [2a7512ff-e63c-4aa0-8b4e-fb241415067f] Running
	I1009 19:28:14.496502  343307 system_pods.go:89] "kindnet-gvpmq" [223d0c34-5384-4cd5-a0d2-842a422629ab] Running
	I1009 19:28:14.496509  343307 system_pods.go:89] "kindnet-rc46j" [22f58fe4-1d11-4259-b9f9-e8740b8b2257] Running
	I1009 19:28:14.496517  343307 system_pods.go:89] "kube-apiserver-ha-807463" [f6f353e4-8237-46db-a4a8-cd536448a79b] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1009 19:28:14.496523  343307 system_pods.go:89] "kube-apiserver-ha-807463-m02" [3d8c0d4b-2cfb-4de6-8d9f-95e25e6f2a4e] Running
	I1009 19:28:14.496534  343307 system_pods.go:89] "kube-apiserver-ha-807463-m03" [a7b828f8-ab95-440a-b42e-e48d83bf3d20] Running
	I1009 19:28:14.496539  343307 system_pods.go:89] "kube-controller-manager-ha-807463" [e409b5f4-e73e-4270-bc1b-44b9a84123c7] Running
	I1009 19:28:14.496544  343307 system_pods.go:89] "kube-controller-manager-ha-807463-m02" [bce8c53d-0ba9-4e5f-93ca-06958824d9ba] Running
	I1009 19:28:14.496553  343307 system_pods.go:89] "kube-controller-manager-ha-807463-m03" [96d81c2f-668e-4729-aa2c-ab008af31ef1] Running
	I1009 19:28:14.496557  343307 system_pods.go:89] "kube-proxy-2lp2p" [cb605c64-8004-4f40-8e70-eb8e3184d3d6] Running
	I1009 19:28:14.496561  343307 system_pods.go:89] "kube-proxy-7lpbk" [d6ba71bf-d06d-4ade-b0e4-85303842110c] Running
	I1009 19:28:14.496566  343307 system_pods.go:89] "kube-proxy-b84dn" [9c10ee5e-8408-4b6f-985a-8d4f44a869cc] Running
	I1009 19:28:14.496575  343307 system_pods.go:89] "kube-proxy-vw7c5" [89df419c-841c-4a9c-af83-50e98327318d] Running
	I1009 19:28:14.496579  343307 system_pods.go:89] "kube-scheduler-ha-807463" [d577e200-00d6-4bac-aa67-0f7ef54c4d1a] Running
	I1009 19:28:14.496583  343307 system_pods.go:89] "kube-scheduler-ha-807463-m02" [848b94f3-79dc-44dc-8416-33c96451e0c0] Running
	I1009 19:28:14.496587  343307 system_pods.go:89] "kube-scheduler-ha-807463-m03" [f7153dac-0ede-40dc-b18c-1c03bebc8414] Running
	I1009 19:28:14.496591  343307 system_pods.go:89] "kube-vip-ha-807463" [f4f09ea9-0059-4cc4-9c0b-0ea2240a1885] Running
	I1009 19:28:14.496597  343307 system_pods.go:89] "kube-vip-ha-807463-m02" [98f28358-d9e9-4f8a-b407-b14baa34ea75] Running
	I1009 19:28:14.496601  343307 system_pods.go:89] "kube-vip-ha-807463-m03" [c150d4cd-1c28-4677-9a55-6e2d119daa81] Running
	I1009 19:28:14.496609  343307 system_pods.go:89] "storage-provisioner" [b9e8a81e-2bee-4542-b231-7490dfbf6065] Running
	I1009 19:28:14.496616  343307 system_pods.go:126] duration metric: took 14.078508ms to wait for k8s-apps to be running ...
	I1009 19:28:14.496627  343307 system_svc.go:44] waiting for kubelet service to be running ....
	I1009 19:28:14.496696  343307 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1009 19:28:14.527254  343307 system_svc.go:56] duration metric: took 30.616666ms WaitForService to wait for kubelet
	I1009 19:28:14.527281  343307 kubeadm.go:586] duration metric: took 22.350452667s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1009 19:28:14.527300  343307 node_conditions.go:102] verifying NodePressure condition ...
	I1009 19:28:14.536047  343307 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1009 19:28:14.536130  343307 node_conditions.go:123] node cpu capacity is 2
	I1009 19:28:14.536159  343307 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1009 19:28:14.536184  343307 node_conditions.go:123] node cpu capacity is 2
	I1009 19:28:14.536225  343307 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1009 19:28:14.536247  343307 node_conditions.go:123] node cpu capacity is 2
	I1009 19:28:14.536284  343307 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1009 19:28:14.536308  343307 node_conditions.go:123] node cpu capacity is 2
	I1009 19:28:14.536330  343307 node_conditions.go:105] duration metric: took 9.020752ms to run NodePressure ...
	I1009 19:28:14.536373  343307 start.go:242] waiting for startup goroutines ...
	I1009 19:28:14.536414  343307 start.go:256] writing updated cluster config ...
	I1009 19:28:14.540247  343307 out.go:203] 
	I1009 19:28:14.543487  343307 config.go:182] Loaded profile config "ha-807463": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1009 19:28:14.543686  343307 profile.go:143] Saving config to /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/ha-807463/config.json ...
	I1009 19:28:14.547047  343307 out.go:179] * Starting "ha-807463-m03" control-plane node in "ha-807463" cluster
	I1009 19:28:14.550723  343307 cache.go:123] Beginning downloading kic base image for docker with crio
	I1009 19:28:14.553769  343307 out.go:179] * Pulling base image v0.0.48-1759745255-21703 ...
	I1009 19:28:14.556767  343307 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1009 19:28:14.556832  343307 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon
	I1009 19:28:14.557073  343307 cache.go:58] Caching tarball of preloaded images
	I1009 19:28:14.557216  343307 preload.go:233] Found /home/jenkins/minikube-integration/21683-294150/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1009 19:28:14.557276  343307 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1009 19:28:14.557431  343307 profile.go:143] Saving config to /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/ha-807463/config.json ...
	I1009 19:28:14.597092  343307 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon, skipping pull
	I1009 19:28:14.597123  343307 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 exists in daemon, skipping load
	I1009 19:28:14.597144  343307 cache.go:232] Successfully downloaded all kic artifacts
	I1009 19:28:14.597168  343307 start.go:361] acquireMachinesLock for ha-807463-m03: {Name:mk0e43107ec0c9bc8c06da921397f514d91f61d8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1009 19:28:14.597229  343307 start.go:365] duration metric: took 46.457µs to acquireMachinesLock for "ha-807463-m03"
	I1009 19:28:14.597250  343307 start.go:97] Skipping create...Using existing machine configuration
	I1009 19:28:14.597255  343307 fix.go:55] fixHost starting: m03
	I1009 19:28:14.597512  343307 cli_runner.go:164] Run: docker container inspect ha-807463-m03 --format={{.State.Status}}
	I1009 19:28:14.632017  343307 fix.go:113] recreateIfNeeded on ha-807463-m03: state=Stopped err=<nil>
	W1009 19:28:14.632042  343307 fix.go:139] unexpected machine state, will restart: <nil>
	I1009 19:28:14.635426  343307 out.go:252] * Restarting existing docker container for "ha-807463-m03" ...
	I1009 19:28:14.635514  343307 cli_runner.go:164] Run: docker start ha-807463-m03
	I1009 19:28:15.014352  343307 cli_runner.go:164] Run: docker container inspect ha-807463-m03 --format={{.State.Status}}
	I1009 19:28:15.044342  343307 kic.go:430] container "ha-807463-m03" state is running.
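Restarting the stopped machine, as above, is just `docker start` followed by polling the container state. A minimal sketch using the docker CLI (container name taken from the log; this is not minikube's cli_runner/kic code):

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// containerState returns the docker container's State.Status ("running", "exited", ...).
func containerState(name string) (string, error) {
	out, err := exec.Command("docker", "container", "inspect", name, "--format", "{{.State.Status}}").Output()
	return strings.TrimSpace(string(out)), err
}

func main() {
	const name = "ha-807463-m03"
	if err := exec.Command("docker", "start", name).Run(); err != nil {
		panic(err)
	}
	deadline := time.Now().Add(1 * time.Minute)
	for time.Now().Before(deadline) {
		if st, err := containerState(name); err == nil && st == "running" {
			fmt.Println("container is running")
			return
		}
		time.Sleep(500 * time.Millisecond)
	}
	panic("timed out waiting for container to reach running state")
}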
	I1009 19:28:15.044802  343307 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-807463-m03
	I1009 19:28:15.084035  343307 profile.go:143] Saving config to /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/ha-807463/config.json ...
	I1009 19:28:15.084294  343307 machine.go:93] provisionDockerMachine start ...
	I1009 19:28:15.084356  343307 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-807463-m03
	I1009 19:28:15.113499  343307 main.go:141] libmachine: Using SSH client type: native
	I1009 19:28:15.113819  343307 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33191 <nil> <nil>}
	I1009 19:28:15.113829  343307 main.go:141] libmachine: About to run SSH command:
	hostname
	I1009 19:28:15.114606  343307 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1009 19:28:18.387326  343307 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-807463-m03
	
	I1009 19:28:18.387353  343307 ubuntu.go:182] provisioning hostname "ha-807463-m03"
	I1009 19:28:18.387421  343307 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-807463-m03
	I1009 19:28:18.414941  343307 main.go:141] libmachine: Using SSH client type: native
	I1009 19:28:18.415269  343307 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33191 <nil> <nil>}
	I1009 19:28:18.415288  343307 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-807463-m03 && echo "ha-807463-m03" | sudo tee /etc/hostname
	I1009 19:28:18.857505  343307 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-807463-m03
	
	I1009 19:28:18.857586  343307 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-807463-m03
	I1009 19:28:18.886274  343307 main.go:141] libmachine: Using SSH client type: native
	I1009 19:28:18.886587  343307 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33191 <nil> <nil>}
	I1009 19:28:18.886603  343307 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-807463-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-807463-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-807463-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1009 19:28:19.124493  343307 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1009 19:28:19.124522  343307 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21683-294150/.minikube CaCertPath:/home/jenkins/minikube-integration/21683-294150/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21683-294150/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21683-294150/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21683-294150/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21683-294150/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21683-294150/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21683-294150/.minikube}
	I1009 19:28:19.124543  343307 ubuntu.go:190] setting up certificates
	I1009 19:28:19.124552  343307 provision.go:84] configureAuth start
	I1009 19:28:19.124639  343307 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-807463-m03
	I1009 19:28:19.150744  343307 provision.go:143] copyHostCerts
	I1009 19:28:19.150791  343307 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-294150/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21683-294150/.minikube/ca.pem
	I1009 19:28:19.150823  343307 exec_runner.go:144] found /home/jenkins/minikube-integration/21683-294150/.minikube/ca.pem, removing ...
	I1009 19:28:19.150839  343307 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21683-294150/.minikube/ca.pem
	I1009 19:28:19.150921  343307 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-294150/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21683-294150/.minikube/ca.pem (1078 bytes)
	I1009 19:28:19.151006  343307 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-294150/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21683-294150/.minikube/cert.pem
	I1009 19:28:19.151029  343307 exec_runner.go:144] found /home/jenkins/minikube-integration/21683-294150/.minikube/cert.pem, removing ...
	I1009 19:28:19.151037  343307 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21683-294150/.minikube/cert.pem
	I1009 19:28:19.151079  343307 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-294150/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21683-294150/.minikube/cert.pem (1123 bytes)
	I1009 19:28:19.151132  343307 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-294150/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21683-294150/.minikube/key.pem
	I1009 19:28:19.151154  343307 exec_runner.go:144] found /home/jenkins/minikube-integration/21683-294150/.minikube/key.pem, removing ...
	I1009 19:28:19.151159  343307 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21683-294150/.minikube/key.pem
	I1009 19:28:19.151184  343307 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-294150/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21683-294150/.minikube/key.pem (1679 bytes)
	I1009 19:28:19.151236  343307 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21683-294150/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21683-294150/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21683-294150/.minikube/certs/ca-key.pem org=jenkins.ha-807463-m03 san=[127.0.0.1 192.168.49.4 ha-807463-m03 localhost minikube]
	I1009 19:28:20.594319  343307 provision.go:177] copyRemoteCerts
	I1009 19:28:20.594391  343307 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1009 19:28:20.594445  343307 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-807463-m03
	I1009 19:28:20.617127  343307 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33191 SSHKeyPath:/home/jenkins/minikube-integration/21683-294150/.minikube/machines/ha-807463-m03/id_rsa Username:docker}
	I1009 19:28:20.793603  343307 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-294150/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1009 19:28:20.793667  343307 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-294150/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1009 19:28:20.838358  343307 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-294150/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1009 19:28:20.838425  343307 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-294150/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1009 19:28:20.897009  343307 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-294150/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1009 19:28:20.897076  343307 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-294150/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1009 19:28:20.947823  343307 provision.go:87] duration metric: took 1.823247487s to configureAuth
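configureAuth, logged above, regenerates the docker-machine server certificate with the node's addresses as SANs (the san=[...] list) and signs it with the machine CA. A rough crypto/x509 sketch of that signing step, assuming an RSA PKCS#1 CA key and the SANs from the log; minikube's provisioner handles more key types and options than this:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func must(err error) {
	if err != nil {
		panic(err)
	}
}

func main() {
	// Load the machine CA (ca.pem / ca-key.pem from the .minikube/certs dir above).
	caPEM, err := os.ReadFile("ca.pem")
	must(err)
	caKeyPEM, err := os.ReadFile("ca-key.pem")
	must(err)
	caBlock, _ := pem.Decode(caPEM)
	keyBlock, _ := pem.Decode(caKeyPEM)
	if caBlock == nil || keyBlock == nil {
		panic("no PEM block in CA files")
	}
	caCert, err := x509.ParseCertificate(caBlock.Bytes)
	must(err)
	caKey, err := x509.ParsePKCS1PrivateKey(keyBlock.Bytes)
	must(err)

	serverKey, err := rsa.GenerateKey(rand.Reader, 2048)
	must(err)
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(time.Now().UnixNano()),
		Subject:      pkix.Name{Organization: []string{"jenkins.ha-807463-m03"}},
		NotBefore:    time.Now().Add(-time.Hour),
		NotAfter:     time.Now().AddDate(3, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// SANs copied from the log: san=[127.0.0.1 192.168.49.4 ha-807463-m03 localhost minikube]
		DNSNames:    []string{"ha-807463-m03", "localhost", "minikube"},
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.49.4")},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &serverKey.PublicKey, caKey)
	must(err)
	must(os.WriteFile("server.pem", pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der}), 0o644))
	keyDER := x509.MarshalPKCS1PrivateKey(serverKey)
	must(os.WriteFile("server-key.pem", pem.EncodeToMemory(&pem.Block{Type: "RSA PRIVATE KEY", Bytes: keyDER}), 0o600))
}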
	I1009 19:28:20.947854  343307 ubuntu.go:206] setting minikube options for container-runtime
	I1009 19:28:20.948102  343307 config.go:182] Loaded profile config "ha-807463": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1009 19:28:20.948220  343307 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-807463-m03
	I1009 19:28:20.980853  343307 main.go:141] libmachine: Using SSH client type: native
	I1009 19:28:20.981192  343307 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33191 <nil> <nil>}
	I1009 19:28:20.981215  343307 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1009 19:28:21.547892  343307 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1009 19:28:21.547940  343307 machine.go:96] duration metric: took 6.463636002s to provisionDockerMachine
	I1009 19:28:21.547953  343307 start.go:294] postStartSetup for "ha-807463-m03" (driver="docker")
	I1009 19:28:21.547963  343307 start.go:323] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1009 19:28:21.548058  343307 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1009 19:28:21.548103  343307 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-807463-m03
	I1009 19:28:21.574619  343307 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33191 SSHKeyPath:/home/jenkins/minikube-integration/21683-294150/.minikube/machines/ha-807463-m03/id_rsa Username:docker}
	I1009 19:28:21.688699  343307 ssh_runner.go:195] Run: cat /etc/os-release
	I1009 19:28:21.693344  343307 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1009 19:28:21.693371  343307 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1009 19:28:21.693382  343307 filesync.go:126] Scanning /home/jenkins/minikube-integration/21683-294150/.minikube/addons for local assets ...
	I1009 19:28:21.693440  343307 filesync.go:126] Scanning /home/jenkins/minikube-integration/21683-294150/.minikube/files for local assets ...
	I1009 19:28:21.693513  343307 filesync.go:149] local asset: /home/jenkins/minikube-integration/21683-294150/.minikube/files/etc/ssl/certs/2960022.pem -> 2960022.pem in /etc/ssl/certs
	I1009 19:28:21.693520  343307 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-294150/.minikube/files/etc/ssl/certs/2960022.pem -> /etc/ssl/certs/2960022.pem
	I1009 19:28:21.693621  343307 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1009 19:28:21.703022  343307 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-294150/.minikube/files/etc/ssl/certs/2960022.pem --> /etc/ssl/certs/2960022.pem (1708 bytes)
	I1009 19:28:21.726060  343307 start.go:297] duration metric: took 178.090392ms for postStartSetup
	I1009 19:28:21.726183  343307 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1009 19:28:21.726252  343307 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-807463-m03
	I1009 19:28:21.754232  343307 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33191 SSHKeyPath:/home/jenkins/minikube-integration/21683-294150/.minikube/machines/ha-807463-m03/id_rsa Username:docker}
	I1009 19:28:21.887060  343307 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1009 19:28:21.902692  343307 fix.go:57] duration metric: took 7.305428838s for fixHost
	I1009 19:28:21.902721  343307 start.go:84] releasing machines lock for "ha-807463-m03", held for 7.305481549s
	I1009 19:28:21.902791  343307 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-807463-m03
	I1009 19:28:21.935444  343307 out.go:179] * Found network options:
	I1009 19:28:21.938464  343307 out.go:179]   - NO_PROXY=192.168.49.2,192.168.49.3
	W1009 19:28:21.941326  343307 proxy.go:120] fail to check proxy env: Error ip not in block
	W1009 19:28:21.941366  343307 proxy.go:120] fail to check proxy env: Error ip not in block
	W1009 19:28:21.941390  343307 proxy.go:120] fail to check proxy env: Error ip not in block
	W1009 19:28:21.941399  343307 proxy.go:120] fail to check proxy env: Error ip not in block
	I1009 19:28:21.941489  343307 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1009 19:28:21.941533  343307 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-807463-m03
	I1009 19:28:21.941553  343307 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1009 19:28:21.941612  343307 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-807463-m03
	I1009 19:28:21.971654  343307 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33191 SSHKeyPath:/home/jenkins/minikube-integration/21683-294150/.minikube/machines/ha-807463-m03/id_rsa Username:docker}
	I1009 19:28:21.991268  343307 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33191 SSHKeyPath:/home/jenkins/minikube-integration/21683-294150/.minikube/machines/ha-807463-m03/id_rsa Username:docker}
	I1009 19:28:22.521550  343307 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1009 19:28:22.531247  343307 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1009 19:28:22.531361  343307 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1009 19:28:22.554768  343307 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1009 19:28:22.554843  343307 start.go:496] detecting cgroup driver to use...
	I1009 19:28:22.554892  343307 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1009 19:28:22.554962  343307 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1009 19:28:22.583220  343307 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1009 19:28:22.599310  343307 docker.go:218] disabling cri-docker service (if available) ...
	I1009 19:28:22.599403  343307 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1009 19:28:22.632291  343307 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1009 19:28:22.653641  343307 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1009 19:28:23.037548  343307 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1009 19:28:23.288869  343307 docker.go:234] disabling docker service ...
	I1009 19:28:23.288983  343307 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1009 19:28:23.316355  343307 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1009 19:28:23.341879  343307 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1009 19:28:23.636459  343307 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1009 19:28:23.958882  343307 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1009 19:28:24.002025  343307 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1009 19:28:24.060081  343307 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1009 19:28:24.060153  343307 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:28:24.094554  343307 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1009 19:28:24.094632  343307 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:28:24.113879  343307 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:28:24.124444  343307 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:28:24.134135  343307 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1009 19:28:24.153071  343307 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:28:24.164683  343307 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:28:24.175420  343307 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:28:24.185724  343307 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1009 19:28:24.196010  343307 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
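Net effect of the sed edits above: /etc/crio/crio.conf.d/02-crio.conf ends up containing roughly the following settings (other keys already present in that drop-in are left untouched; values are the ones substituted by the commands logged above):

    pause_image = "registry.k8s.io/pause:3.10.1"
    cgroup_manager = "cgroupfs"
    conmon_cgroup = "pod"
    default_sysctls = [
      "net.ipv4.ip_unprivileged_port_start=0",
    ]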
	I1009 19:28:24.206389  343307 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 19:28:24.403396  343307 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1009 19:29:54.625257  343307 ssh_runner.go:235] Completed: sudo systemctl restart crio: (1m30.2217701s)
	I1009 19:29:54.625289  343307 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1009 19:29:54.625347  343307 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1009 19:29:54.629422  343307 start.go:564] Will wait 60s for crictl version
	I1009 19:29:54.629487  343307 ssh_runner.go:195] Run: which crictl
	I1009 19:29:54.633348  343307 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1009 19:29:54.664178  343307 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1009 19:29:54.664263  343307 ssh_runner.go:195] Run: crio --version
	I1009 19:29:54.695047  343307 ssh_runner.go:195] Run: crio --version
	I1009 19:29:54.726968  343307 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1009 19:29:54.729882  343307 out.go:179]   - env NO_PROXY=192.168.49.2
	I1009 19:29:54.732783  343307 out.go:179]   - env NO_PROXY=192.168.49.2,192.168.49.3
	I1009 19:29:54.735745  343307 cli_runner.go:164] Run: docker network inspect ha-807463 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1009 19:29:54.754488  343307 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1009 19:29:54.758549  343307 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1009 19:29:54.769025  343307 mustload.go:65] Loading cluster: ha-807463
	I1009 19:29:54.769312  343307 config.go:182] Loaded profile config "ha-807463": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1009 19:29:54.769581  343307 cli_runner.go:164] Run: docker container inspect ha-807463 --format={{.State.Status}}
	I1009 19:29:54.789308  343307 host.go:66] Checking if "ha-807463" exists ...
	I1009 19:29:54.789631  343307 certs.go:69] Setting up /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/ha-807463 for IP: 192.168.49.4
	I1009 19:29:54.789648  343307 certs.go:195] generating shared ca certs ...
	I1009 19:29:54.789665  343307 certs.go:227] acquiring lock for ca certs: {Name:mk21fccf7de12baa51e226eec44546b42b579a70 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 19:29:54.789790  343307 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21683-294150/.minikube/ca.key
	I1009 19:29:54.789840  343307 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21683-294150/.minikube/proxy-client-ca.key
	I1009 19:29:54.789852  343307 certs.go:257] generating profile certs ...
	I1009 19:29:54.789935  343307 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/ha-807463/client.key
	I1009 19:29:54.790005  343307 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/ha-807463/apiserver.key.8f59bad3
	I1009 19:29:54.790050  343307 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/ha-807463/proxy-client.key
	I1009 19:29:54.790063  343307 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-294150/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1009 19:29:54.790075  343307 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-294150/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1009 19:29:54.790096  343307 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-294150/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1009 19:29:54.790112  343307 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-294150/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1009 19:29:54.790124  343307 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/ha-807463/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1009 19:29:54.790141  343307 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/ha-807463/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1009 19:29:54.790152  343307 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/ha-807463/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1009 19:29:54.790162  343307 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/ha-807463/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1009 19:29:54.790217  343307 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-294150/.minikube/certs/296002.pem (1338 bytes)
	W1009 19:29:54.790247  343307 certs.go:480] ignoring /home/jenkins/minikube-integration/21683-294150/.minikube/certs/296002_empty.pem, impossibly tiny 0 bytes
	I1009 19:29:54.790255  343307 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-294150/.minikube/certs/ca-key.pem (1679 bytes)
	I1009 19:29:54.790279  343307 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-294150/.minikube/certs/ca.pem (1078 bytes)
	I1009 19:29:54.790304  343307 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-294150/.minikube/certs/cert.pem (1123 bytes)
	I1009 19:29:54.790325  343307 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-294150/.minikube/certs/key.pem (1679 bytes)
	I1009 19:29:54.790366  343307 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-294150/.minikube/files/etc/ssl/certs/2960022.pem (1708 bytes)
	I1009 19:29:54.790392  343307 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-294150/.minikube/files/etc/ssl/certs/2960022.pem -> /usr/share/ca-certificates/2960022.pem
	I1009 19:29:54.790404  343307 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-294150/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1009 19:29:54.790415  343307 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-294150/.minikube/certs/296002.pem -> /usr/share/ca-certificates/296002.pem
	I1009 19:29:54.790566  343307 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-807463
	I1009 19:29:54.807723  343307 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33181 SSHKeyPath:/home/jenkins/minikube-integration/21683-294150/.minikube/machines/ha-807463/id_rsa Username:docker}
	I1009 19:29:54.905478  343307 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I1009 19:29:54.915115  343307 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I1009 19:29:54.924123  343307 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I1009 19:29:54.927867  343307 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I1009 19:29:54.936366  343307 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I1009 19:29:54.940038  343307 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I1009 19:29:54.948153  343307 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I1009 19:29:54.952558  343307 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I1009 19:29:54.962178  343307 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I1009 19:29:54.966425  343307 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I1009 19:29:54.974761  343307 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I1009 19:29:54.978501  343307 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I1009 19:29:54.987786  343307 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-294150/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1009 19:29:55.037480  343307 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-294150/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1009 19:29:55.060963  343307 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-294150/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1009 19:29:55.082145  343307 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-294150/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1009 19:29:55.105188  343307 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/ha-807463/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1009 19:29:55.128516  343307 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/ha-807463/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1009 19:29:55.149252  343307 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/ha-807463/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1009 19:29:55.172354  343307 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/ha-807463/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1009 19:29:55.193857  343307 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-294150/.minikube/files/etc/ssl/certs/2960022.pem --> /usr/share/ca-certificates/2960022.pem (1708 bytes)
	I1009 19:29:55.219080  343307 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-294150/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1009 19:29:55.237634  343307 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-294150/.minikube/certs/296002.pem --> /usr/share/ca-certificates/296002.pem (1338 bytes)
	I1009 19:29:55.256720  343307 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I1009 19:29:55.279349  343307 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I1009 19:29:55.298083  343307 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I1009 19:29:55.312857  343307 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I1009 19:29:55.328467  343307 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I1009 19:29:55.343367  343307 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I1009 19:29:55.357598  343307 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I1009 19:29:55.374321  343307 ssh_runner.go:195] Run: openssl version
	I1009 19:29:55.380839  343307 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2960022.pem && ln -fs /usr/share/ca-certificates/2960022.pem /etc/ssl/certs/2960022.pem"
	I1009 19:29:55.389522  343307 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2960022.pem
	I1009 19:29:55.394545  343307 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  9 19:08 /usr/share/ca-certificates/2960022.pem
	I1009 19:29:55.394618  343307 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2960022.pem
	I1009 19:29:55.437345  343307 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2960022.pem /etc/ssl/certs/3ec20f2e.0"
	I1009 19:29:55.447436  343307 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1009 19:29:55.456198  343307 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1009 19:29:55.460194  343307 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  9 19:01 /usr/share/ca-certificates/minikubeCA.pem
	I1009 19:29:55.460288  343307 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1009 19:29:55.502457  343307 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1009 19:29:55.511155  343307 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/296002.pem && ln -fs /usr/share/ca-certificates/296002.pem /etc/ssl/certs/296002.pem"
	I1009 19:29:55.519603  343307 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/296002.pem
	I1009 19:29:55.523571  343307 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  9 19:08 /usr/share/ca-certificates/296002.pem
	I1009 19:29:55.523682  343307 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/296002.pem
	I1009 19:29:55.565661  343307 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/296002.pem /etc/ssl/certs/51391683.0"
	I1009 19:29:55.575332  343307 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1009 19:29:55.579545  343307 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1009 19:29:55.620938  343307 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1009 19:29:55.663052  343307 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1009 19:29:55.708075  343307 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1009 19:29:55.749078  343307 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1009 19:29:55.800791  343307 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
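
	Editor's note: the "openssl x509 -noout ... -checkend 86400" runs above verify that each existing control-plane certificate remains valid for at least 24 hours before the files are reused. A minimal Go sketch of the same check, assuming one PEM-encoded certificate per file; the path in main is one of the files checked above and the helper name is illustrative, not minikube's code:

	    package main

	    import (
	        "crypto/x509"
	        "encoding/pem"
	        "fmt"
	        "os"
	        "time"
	    )

	    // certValidFor reports whether the PEM certificate at path is still
	    // valid for at least d from now (the -checkend 86400 calls above use
	    // d = 24h).
	    func certValidFor(path string, d time.Duration) (bool, error) {
	        data, err := os.ReadFile(path)
	        if err != nil {
	            return false, err
	        }
	        block, _ := pem.Decode(data)
	        if block == nil {
	            return false, fmt.Errorf("no PEM data in %s", path)
	        }
	        cert, err := x509.ParseCertificate(block.Bytes)
	        if err != nil {
	            return false, err
	        }
	        return time.Now().Add(d).Before(cert.NotAfter), nil
	    }

	    func main() {
	        ok, err := certValidFor("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	        if err != nil {
	            fmt.Fprintln(os.Stderr, err)
	            os.Exit(1)
	        }
	        fmt.Println("valid for at least 24h:", ok)
	    }
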
	I1009 19:29:55.844259  343307 kubeadm.go:934] updating node {m03 192.168.49.4 8443 v1.34.1 crio true true} ...
	I1009 19:29:55.844433  343307 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-807463-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.4
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:ha-807463 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1009 19:29:55.844463  343307 kube-vip.go:115] generating kube-vip config ...
	I1009 19:29:55.844514  343307 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I1009 19:29:55.857076  343307 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I1009 19:29:55.857168  343307 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
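
	Editor's note: the kube-vip step above shells out to "lsmod | grep ip_vs" and, finding no ipvs modules loaded, gives up on control-plane load balancing and generates the ARP-only VIP manifest shown. A minimal Go sketch of an equivalent check that reads /proc/modules (the same data lsmod reports) directly; the function name is illustrative:

	    package main

	    import (
	        "bufio"
	        "fmt"
	        "os"
	        "strings"
	    )

	    // hasIPVS reports whether any ip_vs* kernel module is currently loaded.
	    func hasIPVS() (bool, error) {
	        f, err := os.Open("/proc/modules")
	        if err != nil {
	            return false, err
	        }
	        defer f.Close()
	        sc := bufio.NewScanner(f)
	        for sc.Scan() {
	            fields := strings.Fields(sc.Text())
	            if len(fields) > 0 && strings.HasPrefix(fields[0], "ip_vs") {
	                return true, nil
	            }
	        }
	        return false, sc.Err()
	    }

	    func main() {
	        ok, err := hasIPVS()
	        if err != nil {
	            fmt.Fprintln(os.Stderr, err)
	            os.Exit(1)
	        }
	        // Without ip_vs, kube-vip (as configured above) only manages the
	        // 192.168.49.254 VIP via ARP leader election.
	        fmt.Println("ip_vs loaded:", ok)
	    }
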
	I1009 19:29:55.857232  343307 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1009 19:29:55.865620  343307 binaries.go:44] Found k8s binaries, skipping transfer
	I1009 19:29:55.865690  343307 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I1009 19:29:55.873976  343307 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1009 19:29:55.888496  343307 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1009 19:29:55.902132  343307 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I1009 19:29:55.918614  343307 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1009 19:29:55.922408  343307 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
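
	Editor's note: the /etc/hosts command above is an idempotent rewrite: it drops any existing line ending in a tab plus "control-plane.minikube.internal" and appends a fresh mapping to 192.168.49.254. A small Go sketch of the same pattern, assuming tab-separated hosts entries; the helper name is illustrative:

	    package main

	    import (
	        "fmt"
	        "os"
	        "strings"
	    )

	    // ensureHostsEntry rewrites hostsPath so exactly one line maps hostname
	    // to ip, leaving every other entry untouched.
	    func ensureHostsEntry(hostsPath, ip, hostname string) error {
	        data, err := os.ReadFile(hostsPath)
	        if err != nil {
	            return err
	        }
	        var kept []string
	        for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
	            if strings.HasSuffix(line, "\t"+hostname) {
	                continue // drop any stale mapping for this hostname
	            }
	            kept = append(kept, line)
	        }
	        kept = append(kept, ip+"\t"+hostname)
	        return os.WriteFile(hostsPath, []byte(strings.Join(kept, "\n")+"\n"), 0o644)
	    }

	    func main() {
	        if err := ensureHostsEntry("/etc/hosts", "192.168.49.254", "control-plane.minikube.internal"); err != nil {
	            fmt.Fprintln(os.Stderr, err)
	            os.Exit(1)
	        }
	    }
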
	I1009 19:29:55.932872  343307 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 19:29:56.078754  343307 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1009 19:29:56.098490  343307 start.go:236] Will wait 6m0s for node &{Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1009 19:29:56.098835  343307 config.go:182] Loaded profile config "ha-807463": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1009 19:29:56.102467  343307 out.go:179] * Verifying Kubernetes components...
	I1009 19:29:56.105295  343307 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 19:29:56.244415  343307 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1009 19:29:56.260645  343307 kapi.go:59] client config for ha-807463: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21683-294150/.minikube/profiles/ha-807463/client.crt", KeyFile:"/home/jenkins/minikube-integration/21683-294150/.minikube/profiles/ha-807463/client.key", CAFile:"/home/jenkins/minikube-integration/21683-294150/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2120180), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1009 19:29:56.260766  343307 kubeadm.go:491] Overriding stale ClientConfig host https://192.168.49.254:8443 with https://192.168.49.2:8443
	I1009 19:29:56.261025  343307 node_ready.go:35] waiting up to 6m0s for node "ha-807463-m03" to be "Ready" ...
	W1009 19:29:58.265441  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:30:00.338043  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:30:02.766376  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:30:05.271576  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:30:07.765013  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:30:09.766174  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:30:12.268909  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:30:14.764872  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:30:16.768216  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:30:19.265861  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:30:21.764655  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:30:23.765433  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:30:26.265822  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:30:28.267509  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:30:30.765442  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:30:33.266200  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:30:35.765625  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:30:38.265302  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:30:40.265407  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:30:42.270313  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:30:44.765053  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:30:47.264227  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:30:49.264310  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:30:51.264693  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:30:53.266262  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:30:55.765430  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:30:57.765657  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:31:00.296961  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:31:02.765162  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:31:05.265758  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:31:07.270661  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:31:09.764829  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:31:11.766346  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:31:14.265615  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:31:16.765212  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:31:19.264362  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:31:21.265737  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:31:23.765070  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:31:26.265524  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:31:28.764786  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:31:30.765098  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:31:33.265489  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:31:35.270526  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:31:37.764838  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:31:40.265487  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:31:42.765053  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:31:45.269843  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:31:47.765589  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:31:49.766098  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:31:52.274275  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:31:54.765171  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:31:57.265540  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:31:59.265763  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:32:01.270860  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:32:03.765024  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:32:06.265424  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:32:08.766290  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:32:10.766762  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:32:13.264661  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:32:15.265789  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:32:17.765441  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:32:19.765504  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:32:22.269835  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:32:24.764880  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:32:26.764993  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:32:28.765201  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:32:30.765672  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:32:33.269831  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:32:35.271203  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:32:37.764975  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:32:39.765423  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:32:42.271235  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:32:44.765366  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:32:47.264895  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:32:49.267101  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:32:51.764961  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:32:53.765546  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:32:55.765910  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:32:58.272156  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:33:00.765521  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:33:03.265015  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:33:05.265319  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:33:07.764930  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:33:09.765819  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:33:12.270731  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:33:14.764917  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:33:16.765423  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:33:19.265783  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:33:21.268655  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:33:23.764590  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:33:25.765798  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:33:28.266110  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:33:30.765102  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:33:33.272016  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:33:35.765481  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:33:38.266269  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:33:40.268920  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:33:42.764575  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:33:44.765157  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:33:47.271446  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:33:49.764820  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:33:51.765204  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:33:54.271178  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:33:56.765244  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:33:59.264746  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:34:01.265757  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:34:03.266309  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:34:05.765330  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:34:08.271832  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:34:10.764901  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:34:13.271000  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:34:15.764750  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:34:18.271187  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:34:20.764309  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:34:22.764554  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:34:24.765015  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:34:27.265491  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:34:29.269747  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:34:31.765383  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:34:34.265977  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:34:36.271158  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:34:38.764726  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:34:41.269997  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:34:43.765647  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:34:46.264806  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:34:48.264841  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:34:50.265171  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:34:52.273405  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:34:54.764904  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:34:56.772617  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:34:59.264570  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:35:01.266121  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:35:03.764578  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:35:05.765062  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:35:07.765743  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:35:10.264753  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:35:12.267514  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:35:14.271366  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:35:16.764238  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:35:18.764646  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:35:21.264582  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:35:23.765647  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:35:26.265493  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:35:28.765534  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:35:31.266108  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:35:33.271209  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:35:35.765495  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:35:38.264544  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:35:40.265777  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:35:42.765010  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:35:45.320159  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:35:47.765477  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:35:50.267171  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:35:52.764971  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:35:54.765424  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	I1009 19:35:56.261403  343307 node_ready.go:38] duration metric: took 6m0.00032425s for node "ha-807463-m03" to be "Ready" ...
	I1009 19:35:56.264406  343307 out.go:203] 
	W1009 19:35:56.267318  343307 out.go:285] X Exiting due to GUEST_START: failed to start node: adding node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1009 19:35:56.267354  343307 out.go:285] * 
	W1009 19:35:56.269757  343307 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1009 19:35:56.272075  343307 out.go:203] 
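
	Editor's note: the failure above is the 6m0s node_ready.go wait giving up: ha-807463-m03 never left the "Ready":"Unknown" state after the restart, so minikube exits with GUEST_START. A minimal client-go sketch of that kind of wait, assuming a kubeconfig in the default location; node name, interval, and timeout mirror the log, but this is a sketch of the pattern, not minikube's exact implementation:

	    package main

	    import (
	        "context"
	        "fmt"
	        "time"

	        corev1 "k8s.io/api/core/v1"
	        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	        "k8s.io/apimachinery/pkg/util/wait"
	        "k8s.io/client-go/kubernetes"
	        "k8s.io/client-go/tools/clientcmd"
	    )

	    // waitNodeReady polls the API server until the named node reports
	    // Ready=True or the timeout expires.
	    func waitNodeReady(ctx context.Context, cs kubernetes.Interface, name string, timeout time.Duration) error {
	        return wait.PollUntilContextTimeout(ctx, 2*time.Second, timeout, true,
	            func(ctx context.Context) (bool, error) {
	                node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
	                if err != nil {
	                    return false, nil // transient API errors: keep retrying
	                }
	                for _, c := range node.Status.Conditions {
	                    if c.Type == corev1.NodeReady {
	                        return c.Status == corev1.ConditionTrue, nil
	                    }
	                }
	                return false, nil
	            })
	    }

	    func main() {
	        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	        if err != nil {
	            panic(err)
	        }
	        cs, err := kubernetes.NewForConfig(cfg)
	        if err != nil {
	            panic(err)
	        }
	        if err := waitNodeReady(context.Background(), cs, "ha-807463-m03", 6*time.Minute); err != nil {
	            fmt.Println("node not Ready:", err)
	        }
	    }
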
	
	
	==> CRI-O <==
	Oct 09 19:28:13 ha-807463 crio[664]: time="2025-10-09T19:28:13.304067783Z" level=info msg="Started container" PID=1189 containerID=0e94c30541006adea7a9cf430df1905830797b4065898a1ff96a0a8704efcde5 description=kube-system/coredns-66bc5c9577-tswbs/coredns id=bd231ca3-3cb5-417c-a27f-e7e210bd2614 name=/runtime.v1.RuntimeService/StartContainer sandboxID=215954c6e5b58ec4e1876606af4120f74fa1b735788f97d908b617d088e10218
	Oct 09 19:28:43 ha-807463 conmon[1165]: conmon 49b67bb8cba0ee99aca2 <ninfo>: container 1170 exited with status 1
	Oct 09 19:28:43 ha-807463 crio[664]: time="2025-10-09T19:28:43.956113094Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=b2497cc7-982c-4437-8e10-8451b3daa825 name=/runtime.v1.ImageService/ImageStatus
	Oct 09 19:28:43 ha-807463 crio[664]: time="2025-10-09T19:28:43.957275756Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=ea766a4c-b850-4d02-b94c-15910e120466 name=/runtime.v1.ImageService/ImageStatus
	Oct 09 19:28:43 ha-807463 crio[664]: time="2025-10-09T19:28:43.958652007Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=441b7fe6-c8e8-4480-a875-e58f7cbbc12c name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:28:43 ha-807463 crio[664]: time="2025-10-09T19:28:43.958885919Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 09 19:28:43 ha-807463 crio[664]: time="2025-10-09T19:28:43.970702736Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 09 19:28:43 ha-807463 crio[664]: time="2025-10-09T19:28:43.970987881Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/934ecdd5b34159ac9e9805425bf47a7191ad8753b0f07efbbd463b24fea61539/merged/etc/passwd: no such file or directory"
	Oct 09 19:28:43 ha-807463 crio[664]: time="2025-10-09T19:28:43.971020948Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/934ecdd5b34159ac9e9805425bf47a7191ad8753b0f07efbbd463b24fea61539/merged/etc/group: no such file or directory"
	Oct 09 19:28:43 ha-807463 crio[664]: time="2025-10-09T19:28:43.971300818Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 09 19:28:43 ha-807463 crio[664]: time="2025-10-09T19:28:43.999493974Z" level=info msg="Created container 1416e569d8f8fe0cb15febba45212fdd6fb1718a9812f18587def66caefda3e1: kube-system/storage-provisioner/storage-provisioner" id=441b7fe6-c8e8-4480-a875-e58f7cbbc12c name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:28:44 ha-807463 crio[664]: time="2025-10-09T19:28:44.001575564Z" level=info msg="Starting container: 1416e569d8f8fe0cb15febba45212fdd6fb1718a9812f18587def66caefda3e1" id=2fe204ab-fca6-41e1-b709-a74e76e04d48 name=/runtime.v1.RuntimeService/StartContainer
	Oct 09 19:28:44 ha-807463 crio[664]: time="2025-10-09T19:28:44.008428394Z" level=info msg="Started container" PID=1408 containerID=1416e569d8f8fe0cb15febba45212fdd6fb1718a9812f18587def66caefda3e1 description=kube-system/storage-provisioner/storage-provisioner id=2fe204ab-fca6-41e1-b709-a74e76e04d48 name=/runtime.v1.RuntimeService/StartContainer sandboxID=7eb46fc741382f55fe16d9dcb41b62c8d30783b6fa783d2d33a2516785da8030
	Oct 09 19:28:53 ha-807463 crio[664]: time="2025-10-09T19:28:53.522067171Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 09 19:28:53 ha-807463 crio[664]: time="2025-10-09T19:28:53.525680672Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 09 19:28:53 ha-807463 crio[664]: time="2025-10-09T19:28:53.525854736Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 09 19:28:53 ha-807463 crio[664]: time="2025-10-09T19:28:53.525929903Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 09 19:28:53 ha-807463 crio[664]: time="2025-10-09T19:28:53.529201175Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 09 19:28:53 ha-807463 crio[664]: time="2025-10-09T19:28:53.52923686Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 09 19:28:53 ha-807463 crio[664]: time="2025-10-09T19:28:53.529253697Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 09 19:28:53 ha-807463 crio[664]: time="2025-10-09T19:28:53.534099464Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 09 19:28:53 ha-807463 crio[664]: time="2025-10-09T19:28:53.534352454Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 09 19:28:53 ha-807463 crio[664]: time="2025-10-09T19:28:53.534487904Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 09 19:28:53 ha-807463 crio[664]: time="2025-10-09T19:28:53.538988916Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 09 19:28:53 ha-807463 crio[664]: time="2025-10-09T19:28:53.539025699Z" level=info msg="Updated default CNI network name to kindnet"
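
	Editor's note: the "CNI monitoring event" lines show CRI-O watching /etc/cni/net.d and re-resolving its default network each time the kindnet conflist is created, written, or renamed. A minimal sketch of that kind of directory watch using the fsnotify library (an assumed dependency here; CRI-O's own watcher differs in detail):

	    package main

	    import (
	        "fmt"
	        "log"

	        "github.com/fsnotify/fsnotify"
	    )

	    func main() {
	        w, err := fsnotify.NewWatcher()
	        if err != nil {
	            log.Fatal(err)
	        }
	        defer w.Close()
	        // Watch the CNI config directory, as CRI-O does above.
	        if err := w.Add("/etc/cni/net.d"); err != nil {
	            log.Fatal(err)
	        }
	        for {
	            select {
	            case ev := <-w.Events:
	                // CREATE/WRITE/RENAME events here correspond to the
	                // "CNI monitoring event" lines in the CRI-O log.
	                fmt.Printf("CNI monitoring event %s %q\n", ev.Op, ev.Name)
	            case err := <-w.Errors:
	                log.Println("watch error:", err)
	            }
	        }
	    }
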
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                 NAMESPACE
	1416e569d8f8f       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6   7 minutes ago       Running             storage-provisioner       2                   7eb46fc741382       storage-provisioner                 kube-system
	0e94c30541006       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc   7 minutes ago       Running             coredns                   1                   215954c6e5b58       coredns-66bc5c9577-tswbs            kube-system
	9adc2cdd19000       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c   7 minutes ago       Running             kindnet-cni               1                   55085f7167d14       kindnet-rc46j                       kube-system
	49b67bb8cba0e       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6   7 minutes ago       Exited              storage-provisioner       1                   7eb46fc741382       storage-provisioner                 kube-system
	dc6736e2d83ca       2a8917f902489be5a8dd414209c32b77bd644d187ea646d86dbdc31e85efb551   7 minutes ago       Running             kube-vip                  1                   e1b7344c7d94c       kube-vip-ha-807463                  kube-system
	ca7bc93dc4dcf       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc   7 minutes ago       Running             coredns                   1                   833e6871e62e2       coredns-66bc5c9577-vkzgf            kube-system
	38276ddd00795       89a35e2ebb6b938201966889b5e8c85b931db6432c5643966116cd1c28bf45cd   7 minutes ago       Running             busybox                   1                   3daf554657528       busybox-7b57f96db7-5z2cl            default
	9f1fd2b441bae       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9   7 minutes ago       Running             kube-proxy                1                   77fe5d534a437       kube-proxy-b84dn                    kube-system
	71e4e3ae2d80c       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a   7 minutes ago       Running             kube-controller-manager   2                   5d2bd7a9c54dd       kube-controller-manager-ha-807463   kube-system
	9d475a483e702       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196   8 minutes ago       Running             kube-apiserver            1                   4ee70f1fb5f58       kube-apiserver-ha-807463            kube-system
	eb3eb3edb2fff       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a   8 minutes ago       Exited              kube-controller-manager   1                   5d2bd7a9c54dd       kube-controller-manager-ha-807463   kube-system
	e4593fb70e6dd       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0   8 minutes ago       Running             kube-scheduler            1                   2d270c8563e10       kube-scheduler-ha-807463            kube-system
	60abd5bf9ea13       2a8917f902489be5a8dd414209c32b77bd644d187ea646d86dbdc31e85efb551   8 minutes ago       Exited              kube-vip                  0                   e1b7344c7d94c       kube-vip-ha-807463                  kube-system
	4477522bd8536       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e   8 minutes ago       Running             etcd                      1                   a372bed836bce       etcd-ha-807463                      kube-system
	
	
	==> coredns [0e94c30541006adea7a9cf430df1905830797b4065898a1ff96a0a8704efcde5] <==
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:35674 - 64583 "HINFO IN 6906546124599759769.4081405551742000183. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.033625487s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	
	
	==> coredns [ca7bc93dc4dcf853db34af69a749d22d607d653f5e3ef5777c55ac602fd2a298] <==
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:56146 - 65116 "HINFO IN 9083642706827740027.9059612721108159707. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.020353957s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
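
	Editor's note: both CoreDNS instances above eventually start with an unsynced API cache because their list calls to the kubernetes Service VIP (10.96.0.1:443) time out. A minimal Go probe for that symptom, run from inside a pod's network namespace; the address is taken from the log and the probe is purely diagnostic:

	    package main

	    import (
	        "fmt"
	        "net"
	        "time"
	    )

	    func main() {
	        // Same endpoint CoreDNS is dialing in the errors above.
	        conn, err := net.DialTimeout("tcp", "10.96.0.1:443", 5*time.Second)
	        if err != nil {
	            fmt.Println("unreachable:", err) // matches the i/o timeout in the CoreDNS log
	            return
	        }
	        conn.Close()
	        fmt.Println("reachable")
	    }
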
	
	
	==> describe nodes <==
	Name:               ha-807463
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=ha-807463
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=67931d2cc0a2e29153be17ee4f2d502b8a45c9cb
	                    minikube.k8s.io/name=ha-807463
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_09T19_22_34_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 09 Oct 2025 19:22:31 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-807463
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 09 Oct 2025 19:35:52 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 09 Oct 2025 19:34:40 +0000   Thu, 09 Oct 2025 19:22:26 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 09 Oct 2025 19:34:40 +0000   Thu, 09 Oct 2025 19:22:26 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 09 Oct 2025 19:34:40 +0000   Thu, 09 Oct 2025 19:22:26 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 09 Oct 2025 19:34:40 +0000   Thu, 09 Oct 2025 19:22:50 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    ha-807463
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022292Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022292Ki
	  pods:               110
	System Info:
	  Machine ID:                 97aacaba546643c3a96be1e87893b40c
	  System UUID:                97caddd7-ad20-4ad3-87a9-90a149a84db2
	  Boot ID:                    7eb9c7d9-8be8-415a-b196-052b632584a4
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7b57f96db7-5z2cl             0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 coredns-66bc5c9577-tswbs             100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     13m
	  kube-system                 coredns-66bc5c9577-vkzgf             100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     13m
	  kube-system                 etcd-ha-807463                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         13m
	  kube-system                 kindnet-rc46j                        100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      13m
	  kube-system                 kube-apiserver-ha-807463             250m (12%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-controller-manager-ha-807463    200m (10%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-proxy-b84dn                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-scheduler-ha-807463             100m (5%)     0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-vip-ha-807463                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m45s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (47%)  100m (5%)
	  memory             290Mi (3%)  390Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 7m44s                  kube-proxy       
	  Normal   Starting                 13m                    kube-proxy       
	  Warning  CgroupV1                 13m                    kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientPID     13m (x8 over 13m)      kubelet          Node ha-807463 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    13m (x8 over 13m)      kubelet          Node ha-807463 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientMemory  13m (x8 over 13m)      kubelet          Node ha-807463 status is now: NodeHasSufficientMemory
	  Normal   Starting                 13m                    kubelet          Starting kubelet.
	  Warning  CgroupV1                 13m                    kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientPID     13m                    kubelet          Node ha-807463 status is now: NodeHasSufficientPID
	  Normal   Starting                 13m                    kubelet          Starting kubelet.
	  Normal   NodeHasSufficientMemory  13m                    kubelet          Node ha-807463 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    13m                    kubelet          Node ha-807463 status is now: NodeHasNoDiskPressure
	  Normal   RegisteredNode           13m                    node-controller  Node ha-807463 event: Registered Node ha-807463 in Controller
	  Normal   NodeReady                13m                    kubelet          Node ha-807463 status is now: NodeReady
	  Normal   RegisteredNode           12m                    node-controller  Node ha-807463 event: Registered Node ha-807463 in Controller
	  Normal   RegisteredNode           11m                    node-controller  Node ha-807463 event: Registered Node ha-807463 in Controller
	  Normal   RegisteredNode           8m41s                  node-controller  Node ha-807463 event: Registered Node ha-807463 in Controller
	  Normal   Starting                 8m19s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 8m19s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  8m19s (x8 over 8m19s)  kubelet          Node ha-807463 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    8m19s (x8 over 8m19s)  kubelet          Node ha-807463 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     8m19s (x8 over 8m19s)  kubelet          Node ha-807463 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           7m42s                  node-controller  Node ha-807463 event: Registered Node ha-807463 in Controller
	  Normal   RegisteredNode           7m32s                  node-controller  Node ha-807463 event: Registered Node ha-807463 in Controller
	
	
	Name:               ha-807463-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=ha-807463-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=67931d2cc0a2e29153be17ee4f2d502b8a45c9cb
	                    minikube.k8s.io/name=ha-807463
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2025_10_09T19_23_14_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 09 Oct 2025 19:23:14 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-807463-m02
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 09 Oct 2025 19:35:49 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 09 Oct 2025 19:33:46 +0000   Thu, 09 Oct 2025 19:23:14 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 09 Oct 2025 19:33:46 +0000   Thu, 09 Oct 2025 19:23:14 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 09 Oct 2025 19:33:46 +0000   Thu, 09 Oct 2025 19:23:14 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 09 Oct 2025 19:33:46 +0000   Thu, 09 Oct 2025 19:23:57 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.3
	  Hostname:    ha-807463-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022292Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022292Ki
	  pods:               110
	System Info:
	  Machine ID:                 615ff68d05a240648cf06e5cd58bdb14
	  System UUID:                4a17c7be-c74f-481f-8bf2-76a62cd3a90f
	  Boot ID:                    7eb9c7d9-8be8-415a-b196-052b632584a4
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7b57f96db7-xqc7g                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 etcd-ha-807463-m02                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         12m
	  kube-system                 kindnet-gvpmq                            100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      12m
	  kube-system                 kube-apiserver-ha-807463-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-controller-manager-ha-807463-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-proxy-7lpbk                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-scheduler-ha-807463-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-vip-ha-807463-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (1%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 7m34s                  kube-proxy       
	  Normal   Starting                 12m                    kube-proxy       
	  Normal   RegisteredNode           12m                    node-controller  Node ha-807463-m02 event: Registered Node ha-807463-m02 in Controller
	  Normal   RegisteredNode           12m                    node-controller  Node ha-807463-m02 event: Registered Node ha-807463-m02 in Controller
	  Normal   RegisteredNode           11m                    node-controller  Node ha-807463-m02 event: Registered Node ha-807463-m02 in Controller
	  Normal   NodeHasSufficientPID     9m22s (x8 over 9m22s)  kubelet          Node ha-807463-m02 status is now: NodeHasSufficientPID
	  Normal   Starting                 9m22s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 9m22s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  9m22s (x8 over 9m22s)  kubelet          Node ha-807463-m02 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    9m22s (x8 over 9m22s)  kubelet          Node ha-807463-m02 status is now: NodeHasNoDiskPressure
	  Normal   RegisteredNode           8m42s                  node-controller  Node ha-807463-m02 event: Registered Node ha-807463-m02 in Controller
	  Normal   Starting                 8m17s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 8m17s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  8m16s (x8 over 8m17s)  kubelet          Node ha-807463-m02 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    8m16s (x8 over 8m17s)  kubelet          Node ha-807463-m02 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     8m16s (x8 over 8m17s)  kubelet          Node ha-807463-m02 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           7m43s                  node-controller  Node ha-807463-m02 event: Registered Node ha-807463-m02 in Controller
	  Normal   RegisteredNode           7m33s                  node-controller  Node ha-807463-m02 event: Registered Node ha-807463-m02 in Controller
	
	
	Name:               ha-807463-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=ha-807463-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=67931d2cc0a2e29153be17ee4f2d502b8a45c9cb
	                    minikube.k8s.io/name=ha-807463
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2025_10_09T19_24_32_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 09 Oct 2025 19:24:31 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-807463-m03
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 09 Oct 2025 19:27:04 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Thu, 09 Oct 2025 19:26:55 +0000   Thu, 09 Oct 2025 19:29:05 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Thu, 09 Oct 2025 19:26:55 +0000   Thu, 09 Oct 2025 19:29:05 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Thu, 09 Oct 2025 19:26:55 +0000   Thu, 09 Oct 2025 19:29:05 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Thu, 09 Oct 2025 19:26:55 +0000   Thu, 09 Oct 2025 19:29:05 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.49.4
	  Hostname:    ha-807463-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022292Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022292Ki
	  pods:               110
	System Info:
	  Machine ID:                 4795d50994764edd82f89cad6576dbc5
	  System UUID:                45d49f75-27b1-4381-a391-59141171cd17
	  Boot ID:                    7eb9c7d9-8be8-415a-b196-052b632584a4
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7b57f96db7-99qlt                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 etcd-ha-807463-m03                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         11m
	  kube-system                 kindnet-dvwc7                            100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      11m
	  kube-system                 kube-apiserver-ha-807463-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-controller-manager-ha-807463-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-proxy-vw7c5                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-scheduler-ha-807463-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-vip-ha-807463-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (1%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age    From             Message
	  ----    ------          ----   ----             -------
	  Normal  Starting        11m    kube-proxy       
	  Normal  RegisteredNode  11m    node-controller  Node ha-807463-m03 event: Registered Node ha-807463-m03 in Controller
	  Normal  RegisteredNode  11m    node-controller  Node ha-807463-m03 event: Registered Node ha-807463-m03 in Controller
	  Normal  RegisteredNode  11m    node-controller  Node ha-807463-m03 event: Registered Node ha-807463-m03 in Controller
	  Normal  RegisteredNode  8m42s  node-controller  Node ha-807463-m03 event: Registered Node ha-807463-m03 in Controller
	  Normal  RegisteredNode  7m43s  node-controller  Node ha-807463-m03 event: Registered Node ha-807463-m03 in Controller
	  Normal  RegisteredNode  7m33s  node-controller  Node ha-807463-m03 event: Registered Node ha-807463-m03 in Controller
	  Normal  NodeNotReady    6m53s  node-controller  Node ha-807463-m03 status is now: NodeNotReady
	
	
	Name:               ha-807463-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=ha-807463-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=67931d2cc0a2e29153be17ee4f2d502b8a45c9cb
	                    minikube.k8s.io/name=ha-807463
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2025_10_09T19_25_45_0700
	                    minikube.k8s.io/version=v1.37.0
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 09 Oct 2025 19:25:44 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-807463-m04
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 09 Oct 2025 19:26:56 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Thu, 09 Oct 2025 19:25:58 +0000   Thu, 09 Oct 2025 19:29:05 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Thu, 09 Oct 2025 19:25:58 +0000   Thu, 09 Oct 2025 19:29:05 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Thu, 09 Oct 2025 19:25:58 +0000   Thu, 09 Oct 2025 19:29:05 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Thu, 09 Oct 2025 19:25:58 +0000   Thu, 09 Oct 2025 19:29:05 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.49.5
	  Hostname:    ha-807463-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022292Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022292Ki
	  pods:               110
	System Info:
	  Machine ID:                 bc067731848740afab5ce03812f74006
	  System UUID:                0f2358b6-a095-45f9-8a33-badc490163a8
	  Boot ID:                    7eb9c7d9-8be8-415a-b196-052b632584a4
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-bc8tf       100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      10m
	  kube-system                 kube-proxy-2lp2p    0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-1Gi      0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	  hugepages-32Mi     0 (0%)     0 (0%)
	  hugepages-64Ki     0 (0%)     0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 10m                kube-proxy       
	  Normal   NodeHasNoDiskPressure    10m (x3 over 10m)  kubelet          Node ha-807463-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     10m (x3 over 10m)  kubelet          Node ha-807463-m04 status is now: NodeHasSufficientPID
	  Normal   Starting                 10m                kubelet          Starting kubelet.
	  Warning  CgroupV1                 10m                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  10m (x3 over 10m)  kubelet          Node ha-807463-m04 status is now: NodeHasSufficientMemory
	  Normal   RegisteredNode           10m                node-controller  Node ha-807463-m04 event: Registered Node ha-807463-m04 in Controller
	  Normal   RegisteredNode           10m                node-controller  Node ha-807463-m04 event: Registered Node ha-807463-m04 in Controller
	  Normal   RegisteredNode           10m                node-controller  Node ha-807463-m04 event: Registered Node ha-807463-m04 in Controller
	  Normal   NodeReady                10m                kubelet          Node ha-807463-m04 status is now: NodeReady
	  Normal   RegisteredNode           8m42s              node-controller  Node ha-807463-m04 event: Registered Node ha-807463-m04 in Controller
	  Normal   RegisteredNode           7m43s              node-controller  Node ha-807463-m04 event: Registered Node ha-807463-m04 in Controller
	  Normal   RegisteredNode           7m33s              node-controller  Node ha-807463-m04 event: Registered Node ha-807463-m04 in Controller
	  Normal   NodeNotReady             6m53s              node-controller  Node ha-807463-m04 status is now: NodeNotReady
	
	
	==> dmesg <==
	[Oct 9 17:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.015195] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.531968] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.036847] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.757016] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +6.932356] kauditd_printk_skb: 36 callbacks suppressed
	[Oct 9 18:02] hrtimer: interrupt took 20603549 ns
	[Oct 9 18:59] kauditd_printk_skb: 8 callbacks suppressed
	[Oct 9 19:02] overlayfs: idmapped layers are currently not supported
	[  +0.066862] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[Oct 9 19:07] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:08] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:14] kauditd_printk_skb: 8 callbacks suppressed
	[Oct 9 19:22] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:23] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:24] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:25] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:26] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:27] overlayfs: idmapped layers are currently not supported
	[  +3.297009] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:28] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [4477522bd8536fe09afcc2397cd8beb927ccd19a6714098fb7bb1f3ef47595ea] <==
	{"level":"warn","ts":"2025-10-09T19:35:29.743781Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"95a22811bdce1330","rtt":"173.409777ms","error":"dial tcp 192.168.49.4:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-10-09T19:35:33.697898Z","caller":"etcdserver/cluster_util.go:259","msg":"failed to reach the peer URL","address":"https://192.168.49.4:2380/version","remote-member-id":"95a22811bdce1330","error":"Get \"https://192.168.49.4:2380/version\": dial tcp 192.168.49.4:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-10-09T19:35:33.697963Z","caller":"etcdserver/cluster_util.go:160","msg":"failed to get version","remote-member-id":"95a22811bdce1330","error":"Get \"https://192.168.49.4:2380/version\": dial tcp 192.168.49.4:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-10-09T19:35:34.737604Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"95a22811bdce1330","rtt":"154.736798ms","error":"dial tcp 192.168.49.4:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-10-09T19:35:34.744661Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"95a22811bdce1330","rtt":"173.409777ms","error":"dial tcp 192.168.49.4:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-10-09T19:35:37.699812Z","caller":"etcdserver/cluster_util.go:259","msg":"failed to reach the peer URL","address":"https://192.168.49.4:2380/version","remote-member-id":"95a22811bdce1330","error":"Get \"https://192.168.49.4:2380/version\": dial tcp 192.168.49.4:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-10-09T19:35:37.699877Z","caller":"etcdserver/cluster_util.go:160","msg":"failed to get version","remote-member-id":"95a22811bdce1330","error":"Get \"https://192.168.49.4:2380/version\": dial tcp 192.168.49.4:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-10-09T19:35:39.738301Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"95a22811bdce1330","rtt":"154.736798ms","error":"dial tcp 192.168.49.4:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-10-09T19:35:39.745428Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"95a22811bdce1330","rtt":"173.409777ms","error":"dial tcp 192.168.49.4:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-10-09T19:35:41.700912Z","caller":"etcdserver/cluster_util.go:259","msg":"failed to reach the peer URL","address":"https://192.168.49.4:2380/version","remote-member-id":"95a22811bdce1330","error":"Get \"https://192.168.49.4:2380/version\": dial tcp 192.168.49.4:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-10-09T19:35:41.700973Z","caller":"etcdserver/cluster_util.go:160","msg":"failed to get version","remote-member-id":"95a22811bdce1330","error":"Get \"https://192.168.49.4:2380/version\": dial tcp 192.168.49.4:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-10-09T19:35:44.738667Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"95a22811bdce1330","rtt":"154.736798ms","error":"dial tcp 192.168.49.4:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-10-09T19:35:44.746370Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"95a22811bdce1330","rtt":"173.409777ms","error":"dial tcp 192.168.49.4:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-10-09T19:35:45.702592Z","caller":"etcdserver/cluster_util.go:259","msg":"failed to reach the peer URL","address":"https://192.168.49.4:2380/version","remote-member-id":"95a22811bdce1330","error":"Get \"https://192.168.49.4:2380/version\": dial tcp 192.168.49.4:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-10-09T19:35:45.702647Z","caller":"etcdserver/cluster_util.go:160","msg":"failed to get version","remote-member-id":"95a22811bdce1330","error":"Get \"https://192.168.49.4:2380/version\": dial tcp 192.168.49.4:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-10-09T19:35:49.704512Z","caller":"etcdserver/cluster_util.go:259","msg":"failed to reach the peer URL","address":"https://192.168.49.4:2380/version","remote-member-id":"95a22811bdce1330","error":"Get \"https://192.168.49.4:2380/version\": dial tcp 192.168.49.4:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-10-09T19:35:49.704562Z","caller":"etcdserver/cluster_util.go:160","msg":"failed to get version","remote-member-id":"95a22811bdce1330","error":"Get \"https://192.168.49.4:2380/version\": dial tcp 192.168.49.4:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-10-09T19:35:49.739178Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"95a22811bdce1330","rtt":"154.736798ms","error":"dial tcp 192.168.49.4:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-10-09T19:35:49.746917Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"95a22811bdce1330","rtt":"173.409777ms","error":"dial tcp 192.168.49.4:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-10-09T19:35:53.706247Z","caller":"etcdserver/cluster_util.go:259","msg":"failed to reach the peer URL","address":"https://192.168.49.4:2380/version","remote-member-id":"95a22811bdce1330","error":"Get \"https://192.168.49.4:2380/version\": dial tcp 192.168.49.4:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-10-09T19:35:53.706306Z","caller":"etcdserver/cluster_util.go:160","msg":"failed to get version","remote-member-id":"95a22811bdce1330","error":"Get \"https://192.168.49.4:2380/version\": dial tcp 192.168.49.4:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-10-09T19:35:54.739992Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"95a22811bdce1330","rtt":"154.736798ms","error":"dial tcp 192.168.49.4:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-10-09T19:35:54.748135Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"95a22811bdce1330","rtt":"173.409777ms","error":"dial tcp 192.168.49.4:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-10-09T19:35:57.707713Z","caller":"etcdserver/cluster_util.go:259","msg":"failed to reach the peer URL","address":"https://192.168.49.4:2380/version","remote-member-id":"95a22811bdce1330","error":"Get \"https://192.168.49.4:2380/version\": dial tcp 192.168.49.4:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-10-09T19:35:57.707776Z","caller":"etcdserver/cluster_util.go:160","msg":"failed to get version","remote-member-id":"95a22811bdce1330","error":"Get \"https://192.168.49.4:2380/version\": dial tcp 192.168.49.4:2380: connect: connection refused"}
	
	
	==> kernel <==
	 19:35:58 up  2:18,  0 user,  load average: 1.08, 1.30, 1.56
	Linux ha-807463 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [9adc2cdd19000926b9c7696c7b7924afabffb77a3346b0bea81bc99d3f74aa0f] <==
	I1009 19:35:23.523261       1 main.go:301] handling current node
	I1009 19:35:33.521028       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1009 19:35:33.521061       1 main.go:301] handling current node
	I1009 19:35:33.521076       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I1009 19:35:33.521082       1 main.go:324] Node ha-807463-m02 has CIDR [10.244.1.0/24] 
	I1009 19:35:33.521316       1 main.go:297] Handling node with IPs: map[192.168.49.4:{}]
	I1009 19:35:33.521332       1 main.go:324] Node ha-807463-m03 has CIDR [10.244.2.0/24] 
	I1009 19:35:33.521390       1 main.go:297] Handling node with IPs: map[192.168.49.5:{}]
	I1009 19:35:33.521401       1 main.go:324] Node ha-807463-m04 has CIDR [10.244.3.0/24] 
	I1009 19:35:43.527640       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1009 19:35:43.527674       1 main.go:301] handling current node
	I1009 19:35:43.527690       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I1009 19:35:43.527696       1 main.go:324] Node ha-807463-m02 has CIDR [10.244.1.0/24] 
	I1009 19:35:43.527890       1 main.go:297] Handling node with IPs: map[192.168.49.4:{}]
	I1009 19:35:43.527932       1 main.go:324] Node ha-807463-m03 has CIDR [10.244.2.0/24] 
	I1009 19:35:43.528029       1 main.go:297] Handling node with IPs: map[192.168.49.5:{}]
	I1009 19:35:43.528041       1 main.go:324] Node ha-807463-m04 has CIDR [10.244.3.0/24] 
	I1009 19:35:53.521146       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1009 19:35:53.521182       1 main.go:301] handling current node
	I1009 19:35:53.521198       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I1009 19:35:53.521204       1 main.go:324] Node ha-807463-m02 has CIDR [10.244.1.0/24] 
	I1009 19:35:53.521367       1 main.go:297] Handling node with IPs: map[192.168.49.4:{}]
	I1009 19:35:53.521387       1 main.go:324] Node ha-807463-m03 has CIDR [10.244.2.0/24] 
	I1009 19:35:53.521449       1 main.go:297] Handling node with IPs: map[192.168.49.5:{}]
	I1009 19:35:53.521462       1 main.go:324] Node ha-807463-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [9d475a483e7023b214d8a1506f2ba793d2cb34e4e0e7b5f0fc49d91b875116f7] <==
	E1009 19:28:12.347613       1 watcher.go:335] watch chan error: etcdserver: no leader
	E1009 19:28:12.347635       1 watcher.go:335] watch chan error: etcdserver: no leader
	E1009 19:28:12.349427       1 watcher.go:335] watch chan error: etcdserver: no leader
	E1009 19:28:12.349481       1 watcher.go:335] watch chan error: etcdserver: no leader
	{"level":"warn","ts":"2025-10-09T19:28:12.355259Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x40013b72c0/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":1,"error":"rpc error: code = Unavailable desc = etcdserver: leader changed"}
	{"level":"warn","ts":"2025-10-09T19:28:12.355532Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x4001b2d2c0/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":1,"error":"rpc error: code = Unavailable desc = etcdserver: leader changed"}
	{"level":"warn","ts":"2025-10-09T19:28:12.355680Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x4001b2d2c0/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":1,"error":"rpc error: code = Unavailable desc = etcdserver: leader changed"}
	{"level":"warn","ts":"2025-10-09T19:28:12.355754Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x40013b72c0/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":1,"error":"rpc error: code = Unavailable desc = etcdserver: leader changed"}
	{"level":"warn","ts":"2025-10-09T19:28:12.355815Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x400046cb40/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":0,"error":"rpc error: code = Unavailable desc = etcdserver: leader changed"}
	{"level":"warn","ts":"2025-10-09T19:28:12.355846Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x400126d680/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":1,"error":"rpc error: code = Unavailable desc = etcdserver: leader changed"}
	{"level":"warn","ts":"2025-10-09T19:28:12.355882Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x40013b72c0/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":1,"error":"rpc error: code = Unavailable desc = etcdserver: leader changed"}
	{"level":"warn","ts":"2025-10-09T19:28:12.355910Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x4000e925a0/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":0,"error":"rpc error: code = Unavailable desc = etcdserver: leader changed"}
	{"level":"warn","ts":"2025-10-09T19:28:12.355975Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x4001959680/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":0,"error":"rpc error: code = Unavailable desc = etcdserver: leader changed"}
	{"level":"warn","ts":"2025-10-09T19:28:12.357750Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x4001959680/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":1,"error":"rpc error: code = Unavailable desc = etcdserver: leader changed"}
	{"level":"warn","ts":"2025-10-09T19:28:12.357862Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x400126cb40/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":1,"error":"rpc error: code = Unavailable desc = etcdserver: leader changed"}
	E1009 19:28:12.360026       1 watcher.go:335] watch chan error: etcdserver: no leader
	E1009 19:28:12.360254       1 watcher.go:335] watch chan error: etcdserver: no leader
	{"level":"warn","ts":"2025-10-09T19:28:12.373075Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x40013b72c0/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":1,"error":"rpc error: code = Unavailable desc = etcdserver: leader changed"}
	{"level":"warn","ts":"2025-10-09T19:28:12.373191Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x40013b72c0/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":1,"error":"rpc error: code = Unavailable desc = etcdserver: leader changed"}
	{"level":"warn","ts":"2025-10-09T19:28:12.373230Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x40013b72c0/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":1,"error":"rpc error: code = Unavailable desc = etcdserver: leader changed"}
	I1009 19:28:12.408675       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	W1009 19:28:13.946618       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.49.2 192.168.49.3 192.168.49.4]
	I1009 19:28:15.990769       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1009 19:28:16.088830       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1009 19:28:22.340287       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	
	
	==> kube-controller-manager [71e4e3ae2d80c0bff2e415aa94adbf172f0541a980a58bc060eaf4114ebfa411] <==
	I1009 19:28:15.789862       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1009 19:28:15.789956       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-807463-m03"
	I1009 19:28:15.790006       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-807463-m04"
	I1009 19:28:15.790043       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-807463"
	I1009 19:28:15.790073       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-807463-m02"
	I1009 19:28:15.792034       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1009 19:28:15.792087       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1009 19:28:15.816012       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1009 19:28:15.816096       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1009 19:28:15.816262       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1009 19:28:15.816289       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1009 19:28:15.816340       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1009 19:28:15.816365       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1009 19:28:15.853279       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1009 19:28:15.853454       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1009 19:28:15.853525       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1009 19:28:15.853569       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1009 19:28:15.900161       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1009 19:28:15.900708       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1009 19:28:15.900757       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1009 19:28:15.900945       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1009 19:28:45.936143       1 endpointslice_controller.go:344] "Error syncing endpoint slices for service, retrying" logger="endpointslice-controller" key="kube-system/kube-dns" err="failed to update kube-dns-f6lp8 EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io \"kube-dns-f6lp8\": the object has been modified; please apply your changes to the latest version and try again"
	I1009 19:28:45.936767       1 event.go:377] Event(v1.ObjectReference{Kind:"Service", Namespace:"kube-system", Name:"kube-dns", UID:"d7915f4a-fefa-4618-a648-059d33b61abc", APIVersion:"v1", ResourceVersion:"291", FieldPath:""}): type: 'Warning' reason: 'FailedToUpdateEndpointSlices' Error updating Endpoint Slices for Service kube-system/kube-dns: failed to update kube-dns-f6lp8 EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io "kube-dns-f6lp8": the object has been modified; please apply your changes to the latest version and try again
	I1009 19:34:15.936504       1 taint_eviction.go:111] "Deleting pod" logger="taint-eviction-controller" controller="taint-eviction-controller" pod="default/busybox-7b57f96db7-99qlt"
	E1009 19:34:16.137977       1 replica_set.go:587] "Unhandled Error" err="sync \"default/busybox-7b57f96db7\" failed with Operation cannot be fulfilled on replicasets.apps \"busybox-7b57f96db7\": the object has been modified; please apply your changes to the latest version and try again" logger="UnhandledError"
	
	
	==> kube-controller-manager [eb3eb3edb2fff30f90b98210a15c7960a0d8f4700c380a4bc2a236e3530d4043] <==
	I1009 19:27:40.800035       1 serving.go:386] Generated self-signed cert in-memory
	I1009 19:27:45.392772       1 controllermanager.go:191] "Starting" version="v1.34.1"
	I1009 19:27:45.392919       1 controllermanager.go:193] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1009 19:27:45.408597       1 secure_serving.go:211] Serving securely on 127.0.0.1:10257
	I1009 19:27:45.408878       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1009 19:27:45.409007       1 dynamic_cafile_content.go:161] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I1009 19:27:45.409053       1 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	E1009 19:28:00.394482       1 controllermanager.go:245] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: an error on the server (\"[+]ping ok\\n[+]log ok\\n[-]etcd failed: reason withheld\\n[+]poststarthook/start-apiserver-admission-initializer ok\\n[+]poststarthook/generic-apiserver-start-informers ok\\n[+]poststarthook/priority-and-fairness-config-consumer ok\\n[+]poststarthook/priority-and-fairness-filter ok\\n[+]poststarthook/storage-object-count-tracker-hook ok\\n[+]poststarthook/start-apiextensions-informers ok\\n[+]poststarthook/start-apiextensions-controllers ok\\n[+]poststarthook/crd-informer-synced ok\\n[+]poststarthook/start-system-namespaces-controller ok\\n[+]poststarthook/start-cluster-authentication-info-controller ok\\n[+]poststarthook/start-kube-apiserver-identity-lease-controller ok\\n[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok\\n[+]poststarthook/start-legacy-token-tracking-controller ok\\n[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld\\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\\n[+]poststarthook/priority-and-fairness-config-producer ok\\n[+]poststarthook/bootstrap-controller ok\\n[+]poststarthook/start-kubernetes-service-cidr-controller ok\\n[+]poststarthook/aggregator-reload-proxy-client-cert ok\\n[+]poststarthook/start-kube-aggregator-informers ok\\n[+]poststarthook/apiservice-status-local-available-controller ok\\n[+]poststarthook/apiservice-status-remote-available-controller ok\\n[+]poststarthook/apiservice-registration-controller ok\\n[+]poststarthook/apiservice-discovery-controller ok\\n[+]poststarthook/kube-apiserver-autoregistration ok\\n[+]autoregister-completion ok\\n[+]poststarthook/apiservice-openapi-controller ok\\n[+]poststarthook/apiservice-openapiv3-controller ok\\nhealthz check failed\") has prevented the request from succeeding"
	
	
	==> kube-proxy [9f1fd2b441bae8a1e1677da06354cd58eb9120cf79ae41fd89aade0d9e36317b] <==
	I1009 19:28:13.524866       1 server_linux.go:53] "Using iptables proxy"
	I1009 19:28:13.683998       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1009 19:28:13.785200       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1009 19:28:13.785297       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1009 19:28:13.785401       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1009 19:28:13.850524       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1009 19:28:13.850775       1 server_linux.go:132] "Using iptables Proxier"
	I1009 19:28:13.858532       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1009 19:28:13.859447       1 server.go:527] "Version info" version="v1.34.1"
	I1009 19:28:13.859472       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1009 19:28:13.869614       1 config.go:200] "Starting service config controller"
	I1009 19:28:13.869702       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1009 19:28:13.869759       1 config.go:106] "Starting endpoint slice config controller"
	I1009 19:28:13.869806       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1009 19:28:13.869854       1 config.go:403] "Starting serviceCIDR config controller"
	I1009 19:28:13.869903       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1009 19:28:13.870681       1 config.go:309] "Starting node config controller"
	I1009 19:28:13.870751       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1009 19:28:13.870783       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1009 19:28:13.977741       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1009 19:28:13.979480       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1009 19:28:13.979510       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [e4593fb70e6dd0047bc83f89897d4c1ad23896e5ca9a3628c4bbeea360f8cbaf] <==
	E1009 19:27:48.441390       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1009 19:27:48.441455       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1009 19:27:48.441529       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1009 19:27:48.441597       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1009 19:27:48.441717       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1009 19:27:48.441800       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1009 19:27:48.441887       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1009 19:27:48.441935       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1009 19:27:49.269919       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1009 19:27:49.288585       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1009 19:27:49.311114       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1009 19:27:49.371959       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1009 19:27:49.404581       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1009 19:27:49.410730       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1009 19:27:49.410883       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1009 19:27:49.418641       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1009 19:27:49.443744       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E1009 19:27:49.470207       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1009 19:27:49.520778       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1009 19:27:49.544432       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1009 19:27:49.566871       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1009 19:27:49.622487       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1009 19:27:49.659599       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1009 19:27:49.667074       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	I1009 19:27:51.424577       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 09 19:27:50 ha-807463 kubelet[800]: I1009 19:27:50.615218     800 apiserver.go:52] "Watching apiserver"
	Oct 09 19:27:50 ha-807463 kubelet[800]: I1009 19:27:50.620110     800 kubelet.go:3202] "Trying to delete pod" pod="kube-system/kube-vip-ha-807463" podUID="2851b5b6-b28e-4749-8fba-920501dc7be3"
	Oct 09 19:27:50 ha-807463 kubelet[800]: I1009 19:27:50.622751     800 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Oct 09 19:27:50 ha-807463 kubelet[800]: I1009 19:27:50.663228     800 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/b9e8a81e-2bee-4542-b231-7490dfbf6065-tmp\") pod \"storage-provisioner\" (UID: \"b9e8a81e-2bee-4542-b231-7490dfbf6065\") " pod="kube-system/storage-provisioner"
	Oct 09 19:27:50 ha-807463 kubelet[800]: I1009 19:27:50.663304     800 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9c10ee5e-8408-4b6f-985a-8d4f44a869cc-xtables-lock\") pod \"kube-proxy-b84dn\" (UID: \"9c10ee5e-8408-4b6f-985a-8d4f44a869cc\") " pod="kube-system/kube-proxy-b84dn"
	Oct 09 19:27:50 ha-807463 kubelet[800]: I1009 19:27:50.663360     800 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/22f58fe4-1d11-4259-b9f9-e8740b8b2257-cni-cfg\") pod \"kindnet-rc46j\" (UID: \"22f58fe4-1d11-4259-b9f9-e8740b8b2257\") " pod="kube-system/kindnet-rc46j"
	Oct 09 19:27:50 ha-807463 kubelet[800]: I1009 19:27:50.663389     800 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9c10ee5e-8408-4b6f-985a-8d4f44a869cc-lib-modules\") pod \"kube-proxy-b84dn\" (UID: \"9c10ee5e-8408-4b6f-985a-8d4f44a869cc\") " pod="kube-system/kube-proxy-b84dn"
	Oct 09 19:27:50 ha-807463 kubelet[800]: I1009 19:27:50.663421     800 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/22f58fe4-1d11-4259-b9f9-e8740b8b2257-xtables-lock\") pod \"kindnet-rc46j\" (UID: \"22f58fe4-1d11-4259-b9f9-e8740b8b2257\") " pod="kube-system/kindnet-rc46j"
	Oct 09 19:27:50 ha-807463 kubelet[800]: I1009 19:27:50.663440     800 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/22f58fe4-1d11-4259-b9f9-e8740b8b2257-lib-modules\") pod \"kindnet-rc46j\" (UID: \"22f58fe4-1d11-4259-b9f9-e8740b8b2257\") " pod="kube-system/kindnet-rc46j"
	Oct 09 19:27:50 ha-807463 kubelet[800]: I1009 19:27:50.667816     800 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="976e04e1cbea4b516ead31d4a83e047c" path="/var/lib/kubelet/pods/976e04e1cbea4b516ead31d4a83e047c/volumes"
	Oct 09 19:28:00 ha-807463 kubelet[800]: I1009 19:28:00.774505     800 scope.go:117] "RemoveContainer" containerID="eb3eb3edb2fff30f90b98210a15c7960a0d8f4700c380a4bc2a236e3530d4043"
	Oct 09 19:28:10 ha-807463 kubelet[800]: E1009 19:28:10.305261     800 controller.go:195] "Failed to update lease" err="Put \"https://192.168.49.2:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ha-807463?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
	Oct 09 19:28:10 ha-807463 kubelet[800]: E1009 19:28:10.446000     800 kubelet_node_status.go:486] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-09T19:28:00Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-09T19:28:00Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-09T19:28:00Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-09T19:28:00Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"ha-807463\": Patch \"https://192.168.49.2:8443/api/v1/nodes/ha-807463/status?timeout=10s\": context deadline exceeded"
	Oct 09 19:28:12 ha-807463 kubelet[800]: I1009 19:28:12.468697     800 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	Oct 09 19:28:12 ha-807463 kubelet[800]: I1009 19:28:12.552182     800 kubelet.go:3208] "Deleted mirror pod as it didn't match the static Pod" pod="kube-system/kube-vip-ha-807463"
	Oct 09 19:28:12 ha-807463 kubelet[800]: I1009 19:28:12.552222     800 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-vip-ha-807463"
	Oct 09 19:28:12 ha-807463 kubelet[800]: W1009 19:28:12.667154     800 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/fea8f67be9d437ba2c81eace43786d004b06e1b2b690c3ab6f02d448677e04b6/crio-3daf554657528d08ab602a2eafcc6211b760b3734a78136296b70f4b7a32baf0 WatchSource:0}: Error finding container 3daf554657528d08ab602a2eafcc6211b760b3734a78136296b70f4b7a32baf0: Status 404 returned error can't find the container with id 3daf554657528d08ab602a2eafcc6211b760b3734a78136296b70f4b7a32baf0
	Oct 09 19:28:12 ha-807463 kubelet[800]: W1009 19:28:12.708992     800 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/fea8f67be9d437ba2c81eace43786d004b06e1b2b690c3ab6f02d448677e04b6/crio-833e6871e62e2720786472951e1248b710ee0b6ab3e58c51a072c96c41234008 WatchSource:0}: Error finding container 833e6871e62e2720786472951e1248b710ee0b6ab3e58c51a072c96c41234008: Status 404 returned error can't find the container with id 833e6871e62e2720786472951e1248b710ee0b6ab3e58c51a072c96c41234008
	Oct 09 19:28:12 ha-807463 kubelet[800]: I1009 19:28:12.824883     800 kubelet.go:3202] "Trying to delete pod" pod="kube-system/kube-vip-ha-807463" podUID="2851b5b6-b28e-4749-8fba-920501dc7be3"
	Oct 09 19:28:12 ha-807463 kubelet[800]: I1009 19:28:12.854312     800 scope.go:117] "RemoveContainer" containerID="60abd5bf9ea13b7e15b4cb133643cb620ae0f536d45d6ac30703be2e3ef7a45f"
	Oct 09 19:28:13 ha-807463 kubelet[800]: W1009 19:28:13.100847     800 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/fea8f67be9d437ba2c81eace43786d004b06e1b2b690c3ab6f02d448677e04b6/crio-215954c6e5b58ec4e1876606af4120f74fa1b735788f97d908b617d088e10218 WatchSource:0}: Error finding container 215954c6e5b58ec4e1876606af4120f74fa1b735788f97d908b617d088e10218: Status 404 returned error can't find the container with id 215954c6e5b58ec4e1876606af4120f74fa1b735788f97d908b617d088e10218
	Oct 09 19:28:13 ha-807463 kubelet[800]: I1009 19:28:13.258189     800 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-vip-ha-807463" podStartSLOduration=1.258171868 podStartE2EDuration="1.258171868s" podCreationTimestamp="2025-10-09 19:28:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-09 19:28:13.202642686 +0000 UTC m=+34.717581260" watchObservedRunningTime="2025-10-09 19:28:13.258171868 +0000 UTC m=+34.773110434"
	Oct 09 19:28:38 ha-807463 kubelet[800]: E1009 19:28:38.614610     800 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"75a5236150873d2e47f94fa0ec7a3606e1bb185ee804c71cf7aaaaeb1a9af3aa\": container with ID starting with 75a5236150873d2e47f94fa0ec7a3606e1bb185ee804c71cf7aaaaeb1a9af3aa not found: ID does not exist" containerID="75a5236150873d2e47f94fa0ec7a3606e1bb185ee804c71cf7aaaaeb1a9af3aa"
	Oct 09 19:28:38 ha-807463 kubelet[800]: I1009 19:28:38.614682     800 kuberuntime_gc.go:364] "Error getting ContainerStatus for containerID" containerID="75a5236150873d2e47f94fa0ec7a3606e1bb185ee804c71cf7aaaaeb1a9af3aa" err="rpc error: code = NotFound desc = could not find container \"75a5236150873d2e47f94fa0ec7a3606e1bb185ee804c71cf7aaaaeb1a9af3aa\": container with ID starting with 75a5236150873d2e47f94fa0ec7a3606e1bb185ee804c71cf7aaaaeb1a9af3aa not found: ID does not exist"
	Oct 09 19:28:43 ha-807463 kubelet[800]: I1009 19:28:43.955424     800 scope.go:117] "RemoveContainer" containerID="49b67bb8cba0ee99aca2811ac91734a84329f896cb75fab3ad456d53105ce0a1"
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p ha-807463 -n ha-807463
helpers_test.go:269: (dbg) Run:  kubectl --context ha-807463 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: busybox-7b57f96db7-hm827
helpers_test.go:282: ======> post-mortem[TestMultiControlPlane/serial/RestartClusterKeepsNodes]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context ha-807463 describe pod busybox-7b57f96db7-hm827
helpers_test.go:290: (dbg) kubectl --context ha-807463 describe pod busybox-7b57f96db7-hm827:

                                                
                                                
-- stdout --
	Name:             busybox-7b57f96db7-hm827
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             <none>
	Labels:           app=busybox
	                  pod-template-hash=7b57f96db7
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Controlled By:    ReplicaSet/busybox-7b57f96db7
	Containers:
	  busybox:
	    Image:      gcr.io/k8s-minikube/busybox:1.28
	    Port:       <none>
	    Host Port:  <none>
	    Command:
	      sleep
	      3600
	    Environment:  <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-d8g9g (ro)
	Conditions:
	  Type           Status
	  PodScheduled   False 
	Volumes:
	  kube-api-access-d8g9g:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason            Age   From               Message
	  ----     ------            ----  ----               -------
	  Warning  FailedScheduling  103s  default-scheduler  0/4 nodes are available: 2 node(s) didn't match pod anti-affinity rules, 2 node(s) had untolerated taint {node.kubernetes.io/unreachable: }. no new claims to deallocate, preemption: 0/4 nodes are available: 2 No preemption victims found for incoming pod, 2 Preemption is not helpful for scheduling.
	  Warning  FailedScheduling  103s  default-scheduler  0/4 nodes are available: 2 node(s) didn't match pod anti-affinity rules, 2 node(s) had untolerated taint {node.kubernetes.io/unreachable: }. no new claims to deallocate, preemption: 0/4 nodes are available: 2 No preemption victims found for incoming pod, 2 Preemption is not helpful for scheduling.

                                                
                                                
-- /stdout --
helpers_test.go:293: <<< TestMultiControlPlane/serial/RestartClusterKeepsNodes FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/RestartClusterKeepsNodes (535.17s)
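For illustration only (this note and the Go snippet below are not part of the captured test output): the describe output above shows why busybox-7b57f96db7-hm827 stays Pending after the restart, since every node is rejected either by the pod anti-affinity rule or by the node.kubernetes.io/unreachable taint. The non-running-pod query at helpers_test.go:269 can also be run through client-go; the sketch below assumes the kubeconfig path shown later in this report and that its current context is ha-807463.

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Kubeconfig path taken from this run's environment; assumes the
	// current context in it is the ha-807463 cluster.
	config, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/21683-294150/kubeconfig")
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	// Same selector the post-mortem uses: every pod whose phase is not Running, all namespaces.
	pods, err := clientset.CoreV1().Pods("").List(context.TODO(), metav1.ListOptions{
		FieldSelector: "status.phase!=Running",
	})
	if err != nil {
		panic(err)
	}
	for _, p := range pods.Items {
		fmt.Printf("%s/%s is %s\n", p.Namespace, p.Name, p.Status.Phase)
	}
}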

                                                
                                    
x
+
TestMultiControlPlane/serial/DeleteSecondaryNode (9.16s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-linux-arm64 -p ha-807463 node delete m03 --alsologtostderr -v 5
ha_test.go:489: (dbg) Done: out/minikube-linux-arm64 -p ha-807463 node delete m03 --alsologtostderr -v 5: (6.017403832s)
ha_test.go:495: (dbg) Run:  out/minikube-linux-arm64 -p ha-807463 status --alsologtostderr -v 5
ha_test.go:495: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-807463 status --alsologtostderr -v 5: exit status 7 (633.685461ms)

                                                
                                                
-- stdout --
	ha-807463
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-807463-m02
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-807463-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1009 19:36:05.348795  349277 out.go:360] Setting OutFile to fd 1 ...
	I1009 19:36:05.349003  349277 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1009 19:36:05.349019  349277 out.go:374] Setting ErrFile to fd 2...
	I1009 19:36:05.349025  349277 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1009 19:36:05.349357  349277 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21683-294150/.minikube/bin
	I1009 19:36:05.349592  349277 out.go:368] Setting JSON to false
	I1009 19:36:05.349641  349277 mustload.go:65] Loading cluster: ha-807463
	I1009 19:36:05.349714  349277 notify.go:221] Checking for updates...
	I1009 19:36:05.350185  349277 config.go:182] Loaded profile config "ha-807463": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1009 19:36:05.350208  349277 status.go:174] checking status of ha-807463 ...
	I1009 19:36:05.351835  349277 cli_runner.go:164] Run: docker container inspect ha-807463 --format={{.State.Status}}
	I1009 19:36:05.370989  349277 status.go:371] ha-807463 host status = "Running" (err=<nil>)
	I1009 19:36:05.371014  349277 host.go:66] Checking if "ha-807463" exists ...
	I1009 19:36:05.371331  349277 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-807463
	I1009 19:36:05.397137  349277 host.go:66] Checking if "ha-807463" exists ...
	I1009 19:36:05.397446  349277 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1009 19:36:05.397495  349277 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-807463
	I1009 19:36:05.422333  349277 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33181 SSHKeyPath:/home/jenkins/minikube-integration/21683-294150/.minikube/machines/ha-807463/id_rsa Username:docker}
	I1009 19:36:05.527167  349277 ssh_runner.go:195] Run: systemctl --version
	I1009 19:36:05.534579  349277 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1009 19:36:05.548454  349277 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1009 19:36:05.620821  349277 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:50 OomKillDisable:true NGoroutines:62 SystemTime:2025-10-09 19:36:05.611115345 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214827008 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1009 19:36:05.621438  349277 kubeconfig.go:125] found "ha-807463" server: "https://192.168.49.254:8443"
	I1009 19:36:05.621474  349277 api_server.go:166] Checking apiserver status ...
	I1009 19:36:05.621542  349277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 19:36:05.633920  349277 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/959/cgroup
	I1009 19:36:05.645244  349277 api_server.go:182] apiserver freezer: "12:freezer:/docker/fea8f67be9d437ba2c81eace43786d004b06e1b2b690c3ab6f02d448677e04b6/crio/crio-9d475a483e7023b214d8a1506f2ba793d2cb34e4e0e7b5f0fc49d91b875116f7"
	I1009 19:36:05.645349  349277 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/fea8f67be9d437ba2c81eace43786d004b06e1b2b690c3ab6f02d448677e04b6/crio/crio-9d475a483e7023b214d8a1506f2ba793d2cb34e4e0e7b5f0fc49d91b875116f7/freezer.state
	I1009 19:36:05.654032  349277 api_server.go:204] freezer state: "THAWED"
	I1009 19:36:05.654063  349277 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1009 19:36:05.664991  349277 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1009 19:36:05.665024  349277 status.go:463] ha-807463 apiserver status = Running (err=<nil>)
	I1009 19:36:05.665037  349277 status.go:176] ha-807463 status: &{Name:ha-807463 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1009 19:36:05.665057  349277 status.go:174] checking status of ha-807463-m02 ...
	I1009 19:36:05.665409  349277 cli_runner.go:164] Run: docker container inspect ha-807463-m02 --format={{.State.Status}}
	I1009 19:36:05.683027  349277 status.go:371] ha-807463-m02 host status = "Running" (err=<nil>)
	I1009 19:36:05.683056  349277 host.go:66] Checking if "ha-807463-m02" exists ...
	I1009 19:36:05.683376  349277 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-807463-m02
	I1009 19:36:05.703907  349277 host.go:66] Checking if "ha-807463-m02" exists ...
	I1009 19:36:05.704297  349277 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1009 19:36:05.704361  349277 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-807463-m02
	I1009 19:36:05.731220  349277 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33186 SSHKeyPath:/home/jenkins/minikube-integration/21683-294150/.minikube/machines/ha-807463-m02/id_rsa Username:docker}
	I1009 19:36:05.834805  349277 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1009 19:36:05.848680  349277 kubeconfig.go:125] found "ha-807463" server: "https://192.168.49.254:8443"
	I1009 19:36:05.848756  349277 api_server.go:166] Checking apiserver status ...
	I1009 19:36:05.848841  349277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 19:36:05.865940  349277 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1161/cgroup
	I1009 19:36:05.877437  349277 api_server.go:182] apiserver freezer: "12:freezer:/docker/dd3185e669c04d77eb25e9cb7d4804ea140c9ded2bbbb135f8c3fd8ff10126ec/crio/crio-790696fc1ed7cad712687773416bdb9b0f82a2c630f9acd856faf46c8934bcf1"
	I1009 19:36:05.877541  349277 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/dd3185e669c04d77eb25e9cb7d4804ea140c9ded2bbbb135f8c3fd8ff10126ec/crio/crio-790696fc1ed7cad712687773416bdb9b0f82a2c630f9acd856faf46c8934bcf1/freezer.state
	I1009 19:36:05.885859  349277 api_server.go:204] freezer state: "THAWED"
	I1009 19:36:05.885889  349277 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1009 19:36:05.895297  349277 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1009 19:36:05.895329  349277 status.go:463] ha-807463-m02 apiserver status = Running (err=<nil>)
	I1009 19:36:05.895338  349277 status.go:176] ha-807463-m02 status: &{Name:ha-807463-m02 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1009 19:36:05.895355  349277 status.go:174] checking status of ha-807463-m04 ...
	I1009 19:36:05.895739  349277 cli_runner.go:164] Run: docker container inspect ha-807463-m04 --format={{.State.Status}}
	I1009 19:36:05.917590  349277 status.go:371] ha-807463-m04 host status = "Stopped" (err=<nil>)
	I1009 19:36:05.917616  349277 status.go:384] host is not running, skipping remaining checks
	I1009 19:36:05.917623  349277 status.go:176] ha-807463-m04 status: &{Name:ha-807463-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:497: failed to run minikube status. args "out/minikube-linux-arm64 -p ha-807463 status --alsologtostderr -v 5" : exit status 7
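As context only (not captured output): the status text above lists ha-807463-m04 with host and kubelet Stopped, and the status command returned exit status 7, which is what ha_test.go:497 treats as a failure. A minimal Go sketch of the same check the test performs, using the binary path from this report; treating any non-zero exit as a degraded cluster is an assumption that matches how the test interprets it here.

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Same invocation as ha_test.go:495 in this run.
	cmd := exec.Command("out/minikube-linux-arm64", "-p", "ha-807463", "status", "--alsologtostderr", "-v", "5")
	out, err := cmd.CombinedOutput()
	fmt.Print(string(out))
	if ee, ok := err.(*exec.ExitError); ok {
		// Exit status 7 was observed above while ha-807463-m04 was Stopped.
		fmt.Println("status exited with code", ee.ExitCode())
	}
}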
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestMultiControlPlane/serial/DeleteSecondaryNode]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestMultiControlPlane/serial/DeleteSecondaryNode]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect ha-807463
helpers_test.go:243: (dbg) docker inspect ha-807463:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "fea8f67be9d437ba2c81eace43786d004b06e1b2b690c3ab6f02d448677e04b6",
	        "Created": "2025-10-09T19:22:12.218448558Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 343436,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-09T19:27:31.498701729Z",
	            "FinishedAt": "2025-10-09T19:27:30.881285461Z"
	        },
	        "Image": "sha256:0e619a02aa1f399c748a05785ca381533d7a01df724b62f3a9ddd9e288db8f6f",
	        "ResolvConfPath": "/var/lib/docker/containers/fea8f67be9d437ba2c81eace43786d004b06e1b2b690c3ab6f02d448677e04b6/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/fea8f67be9d437ba2c81eace43786d004b06e1b2b690c3ab6f02d448677e04b6/hostname",
	        "HostsPath": "/var/lib/docker/containers/fea8f67be9d437ba2c81eace43786d004b06e1b2b690c3ab6f02d448677e04b6/hosts",
	        "LogPath": "/var/lib/docker/containers/fea8f67be9d437ba2c81eace43786d004b06e1b2b690c3ab6f02d448677e04b6/fea8f67be9d437ba2c81eace43786d004b06e1b2b690c3ab6f02d448677e04b6-json.log",
	        "Name": "/ha-807463",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ha-807463:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "ha-807463",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "fea8f67be9d437ba2c81eace43786d004b06e1b2b690c3ab6f02d448677e04b6",
	                "LowerDir": "/var/lib/docker/overlay2/501f3dc17989cbf113e3e1d86a2dc5dbf4a1ebf96c1051617a1e82e0c118ddb2-init/diff:/var/lib/docker/overlay2/810a91395ed9b7ed2c0bbbdee8600efcf64f88722cbabc47d471235a9f901ed9/diff",
	                "MergedDir": "/var/lib/docker/overlay2/501f3dc17989cbf113e3e1d86a2dc5dbf4a1ebf96c1051617a1e82e0c118ddb2/merged",
	                "UpperDir": "/var/lib/docker/overlay2/501f3dc17989cbf113e3e1d86a2dc5dbf4a1ebf96c1051617a1e82e0c118ddb2/diff",
	                "WorkDir": "/var/lib/docker/overlay2/501f3dc17989cbf113e3e1d86a2dc5dbf4a1ebf96c1051617a1e82e0c118ddb2/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "ha-807463",
	                "Source": "/var/lib/docker/volumes/ha-807463/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "ha-807463",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ha-807463",
	                "name.minikube.sigs.k8s.io": "ha-807463",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "519a0261c9568d4a6f9cab4a02626789b917d4097449bf7d122da62e1553ad90",
	            "SandboxKey": "/var/run/docker/netns/519a0261c956",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33181"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33182"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33185"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33183"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33184"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ha-807463": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "46:d7:45:51:f4:8a",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "3847a657768484ae039efdd09e2b590403676178eb4c67c06a2221fe144c70b7",
	                    "EndpointID": "1be139014228dabc7add444f5a4d8325f46a753a08b0696634c3bb797577acd0",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "ha-807463",
	                        "fea8f67be9d4"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
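Aside (not part of the docker inspect capture): the Ports map above is how the status and SSH helpers resolve the node's SSH endpoint; they read the host port Docker published for 22/tcp (33181 here) and dial 127.0.0.1 on it, as the cli_runner and sshutil lines in the stderr trace show. A small Go reproduction of that lookup, reusing the Go template string that appears verbatim in the logs:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Template copied from the cli_runner invocations above.
	format := `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
	out, err := exec.Command("docker", "container", "inspect", "-f", format, "ha-807463").Output()
	if err != nil {
		panic(err)
	}
	fmt.Println("ssh endpoint: 127.0.0.1:" + strings.TrimSpace(string(out)))
}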
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p ha-807463 -n ha-807463
helpers_test.go:252: <<< TestMultiControlPlane/serial/DeleteSecondaryNode FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestMultiControlPlane/serial/DeleteSecondaryNode]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p ha-807463 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p ha-807463 logs -n 25: (1.341427203s)
helpers_test.go:260: TestMultiControlPlane/serial/DeleteSecondaryNode logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                 ARGS                                                                 │  PROFILE  │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ ha-807463 ssh -n ha-807463-m03 sudo cat /home/docker/cp-test.txt                                                                     │ ha-807463 │ jenkins │ v1.37.0 │ 09 Oct 25 19:26 UTC │ 09 Oct 25 19:26 UTC │
	│ ssh     │ ha-807463 ssh -n ha-807463-m02 sudo cat /home/docker/cp-test_ha-807463-m03_ha-807463-m02.txt                                         │ ha-807463 │ jenkins │ v1.37.0 │ 09 Oct 25 19:26 UTC │ 09 Oct 25 19:26 UTC │
	│ cp      │ ha-807463 cp ha-807463-m03:/home/docker/cp-test.txt ha-807463-m04:/home/docker/cp-test_ha-807463-m03_ha-807463-m04.txt               │ ha-807463 │ jenkins │ v1.37.0 │ 09 Oct 25 19:26 UTC │ 09 Oct 25 19:26 UTC │
	│ ssh     │ ha-807463 ssh -n ha-807463-m03 sudo cat /home/docker/cp-test.txt                                                                     │ ha-807463 │ jenkins │ v1.37.0 │ 09 Oct 25 19:26 UTC │ 09 Oct 25 19:26 UTC │
	│ ssh     │ ha-807463 ssh -n ha-807463-m04 sudo cat /home/docker/cp-test_ha-807463-m03_ha-807463-m04.txt                                         │ ha-807463 │ jenkins │ v1.37.0 │ 09 Oct 25 19:26 UTC │ 09 Oct 25 19:26 UTC │
	│ cp      │ ha-807463 cp testdata/cp-test.txt ha-807463-m04:/home/docker/cp-test.txt                                                             │ ha-807463 │ jenkins │ v1.37.0 │ 09 Oct 25 19:26 UTC │ 09 Oct 25 19:26 UTC │
	│ ssh     │ ha-807463 ssh -n ha-807463-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-807463 │ jenkins │ v1.37.0 │ 09 Oct 25 19:26 UTC │ 09 Oct 25 19:26 UTC │
	│ cp      │ ha-807463 cp ha-807463-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1218422779/001/cp-test_ha-807463-m04.txt │ ha-807463 │ jenkins │ v1.37.0 │ 09 Oct 25 19:26 UTC │ 09 Oct 25 19:26 UTC │
	│ ssh     │ ha-807463 ssh -n ha-807463-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-807463 │ jenkins │ v1.37.0 │ 09 Oct 25 19:26 UTC │ 09 Oct 25 19:26 UTC │
	│ cp      │ ha-807463 cp ha-807463-m04:/home/docker/cp-test.txt ha-807463:/home/docker/cp-test_ha-807463-m04_ha-807463.txt                       │ ha-807463 │ jenkins │ v1.37.0 │ 09 Oct 25 19:26 UTC │ 09 Oct 25 19:26 UTC │
	│ ssh     │ ha-807463 ssh -n ha-807463-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-807463 │ jenkins │ v1.37.0 │ 09 Oct 25 19:26 UTC │ 09 Oct 25 19:26 UTC │
	│ ssh     │ ha-807463 ssh -n ha-807463 sudo cat /home/docker/cp-test_ha-807463-m04_ha-807463.txt                                                 │ ha-807463 │ jenkins │ v1.37.0 │ 09 Oct 25 19:26 UTC │ 09 Oct 25 19:26 UTC │
	│ cp      │ ha-807463 cp ha-807463-m04:/home/docker/cp-test.txt ha-807463-m02:/home/docker/cp-test_ha-807463-m04_ha-807463-m02.txt               │ ha-807463 │ jenkins │ v1.37.0 │ 09 Oct 25 19:26 UTC │ 09 Oct 25 19:26 UTC │
	│ ssh     │ ha-807463 ssh -n ha-807463-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-807463 │ jenkins │ v1.37.0 │ 09 Oct 25 19:26 UTC │ 09 Oct 25 19:26 UTC │
	│ ssh     │ ha-807463 ssh -n ha-807463-m02 sudo cat /home/docker/cp-test_ha-807463-m04_ha-807463-m02.txt                                         │ ha-807463 │ jenkins │ v1.37.0 │ 09 Oct 25 19:26 UTC │ 09 Oct 25 19:26 UTC │
	│ cp      │ ha-807463 cp ha-807463-m04:/home/docker/cp-test.txt ha-807463-m03:/home/docker/cp-test_ha-807463-m04_ha-807463-m03.txt               │ ha-807463 │ jenkins │ v1.37.0 │ 09 Oct 25 19:26 UTC │ 09 Oct 25 19:26 UTC │
	│ ssh     │ ha-807463 ssh -n ha-807463-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-807463 │ jenkins │ v1.37.0 │ 09 Oct 25 19:26 UTC │ 09 Oct 25 19:26 UTC │
	│ ssh     │ ha-807463 ssh -n ha-807463-m03 sudo cat /home/docker/cp-test_ha-807463-m04_ha-807463-m03.txt                                         │ ha-807463 │ jenkins │ v1.37.0 │ 09 Oct 25 19:26 UTC │ 09 Oct 25 19:26 UTC │
	│ node    │ ha-807463 node stop m02 --alsologtostderr -v 5                                                                                       │ ha-807463 │ jenkins │ v1.37.0 │ 09 Oct 25 19:26 UTC │ 09 Oct 25 19:26 UTC │
	│ node    │ ha-807463 node start m02 --alsologtostderr -v 5                                                                                      │ ha-807463 │ jenkins │ v1.37.0 │ 09 Oct 25 19:26 UTC │ 09 Oct 25 19:27 UTC │
	│ node    │ ha-807463 node list --alsologtostderr -v 5                                                                                           │ ha-807463 │ jenkins │ v1.37.0 │ 09 Oct 25 19:27 UTC │                     │
	│ stop    │ ha-807463 stop --alsologtostderr -v 5                                                                                                │ ha-807463 │ jenkins │ v1.37.0 │ 09 Oct 25 19:27 UTC │ 09 Oct 25 19:27 UTC │
	│ start   │ ha-807463 start --wait true --alsologtostderr -v 5                                                                                   │ ha-807463 │ jenkins │ v1.37.0 │ 09 Oct 25 19:27 UTC │                     │
	│ node    │ ha-807463 node list --alsologtostderr -v 5                                                                                           │ ha-807463 │ jenkins │ v1.37.0 │ 09 Oct 25 19:35 UTC │                     │
	│ node    │ ha-807463 node delete m03 --alsologtostderr -v 5                                                                                     │ ha-807463 │ jenkins │ v1.37.0 │ 09 Oct 25 19:35 UTC │ 09 Oct 25 19:36 UTC │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/09 19:27:31
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1009 19:27:31.218830  343307 out.go:360] Setting OutFile to fd 1 ...
	I1009 19:27:31.218980  343307 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1009 19:27:31.218993  343307 out.go:374] Setting ErrFile to fd 2...
	I1009 19:27:31.219013  343307 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1009 19:27:31.219307  343307 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21683-294150/.minikube/bin
	I1009 19:27:31.219769  343307 out.go:368] Setting JSON to false
	I1009 19:27:31.220680  343307 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":7791,"bootTime":1760030261,"procs":153,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1009 19:27:31.220751  343307 start.go:143] virtualization:  
	I1009 19:27:31.225902  343307 out.go:179] * [ha-807463] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1009 19:27:31.229045  343307 out.go:179]   - MINIKUBE_LOCATION=21683
	I1009 19:27:31.229154  343307 notify.go:221] Checking for updates...
	I1009 19:27:31.235436  343307 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1009 19:27:31.238296  343307 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21683-294150/kubeconfig
	I1009 19:27:31.241057  343307 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21683-294150/.minikube
	I1009 19:27:31.243947  343307 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1009 19:27:31.246781  343307 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1009 19:27:31.250030  343307 config.go:182] Loaded profile config "ha-807463": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1009 19:27:31.250184  343307 driver.go:422] Setting default libvirt URI to qemu:///system
	I1009 19:27:31.286472  343307 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1009 19:27:31.286604  343307 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1009 19:27:31.343705  343307 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:4 ContainersRunning:0 ContainersPaused:0 ContainersStopped:4 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:27 OomKillDisable:true NGoroutines:42 SystemTime:2025-10-09 19:27:31.334706362 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214827008 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1009 19:27:31.343816  343307 docker.go:319] overlay module found
	I1009 19:27:31.346870  343307 out.go:179] * Using the docker driver based on existing profile
	I1009 19:27:31.349767  343307 start.go:309] selected driver: docker
	I1009 19:27:31.349786  343307 start.go:930] validating driver "docker" against &{Name:ha-807463 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-807463 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName
:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.1 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:f
alse ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false Dis
ableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1009 19:27:31.349926  343307 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1009 19:27:31.350028  343307 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1009 19:27:31.412249  343307 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:4 ContainersRunning:0 ContainersPaused:0 ContainersStopped:4 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:27 OomKillDisable:true NGoroutines:42 SystemTime:2025-10-09 19:27:31.403030574 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214827008 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1009 19:27:31.412653  343307 start_flags.go:993] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1009 19:27:31.412689  343307 cni.go:84] Creating CNI manager for ""
	I1009 19:27:31.412755  343307 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I1009 19:27:31.412799  343307 start.go:353] cluster config:
	{Name:ha-807463 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-807463 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerR
untime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false isti
o-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMne
tClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1009 19:27:31.417709  343307 out.go:179] * Starting "ha-807463" primary control-plane node in "ha-807463" cluster
	I1009 19:27:31.420530  343307 cache.go:123] Beginning downloading kic base image for docker with crio
	I1009 19:27:31.423466  343307 out.go:179] * Pulling base image v0.0.48-1759745255-21703 ...
	I1009 19:27:31.426321  343307 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1009 19:27:31.426392  343307 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21683-294150/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1009 19:27:31.426406  343307 cache.go:58] Caching tarball of preloaded images
	I1009 19:27:31.426410  343307 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon
	I1009 19:27:31.426490  343307 preload.go:233] Found /home/jenkins/minikube-integration/21683-294150/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1009 19:27:31.426508  343307 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1009 19:27:31.426650  343307 profile.go:143] Saving config to /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/ha-807463/config.json ...
	I1009 19:27:31.445925  343307 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon, skipping pull
	I1009 19:27:31.445951  343307 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 exists in daemon, skipping load
	I1009 19:27:31.445969  343307 cache.go:232] Successfully downloaded all kic artifacts
	I1009 19:27:31.446007  343307 start.go:361] acquireMachinesLock for ha-807463: {Name:mk7b03a6b271157d59e205354be444442bc66672 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1009 19:27:31.446069  343307 start.go:365] duration metric: took 41.674µs to acquireMachinesLock for "ha-807463"
	I1009 19:27:31.446095  343307 start.go:97] Skipping create...Using existing machine configuration
	I1009 19:27:31.446101  343307 fix.go:55] fixHost starting: 
	I1009 19:27:31.446358  343307 cli_runner.go:164] Run: docker container inspect ha-807463 --format={{.State.Status}}
	I1009 19:27:31.463339  343307 fix.go:113] recreateIfNeeded on ha-807463: state=Stopped err=<nil>
	W1009 19:27:31.463369  343307 fix.go:139] unexpected machine state, will restart: <nil>
	I1009 19:27:31.466724  343307 out.go:252] * Restarting existing docker container for "ha-807463" ...
	I1009 19:27:31.466808  343307 cli_runner.go:164] Run: docker start ha-807463
	I1009 19:27:31.729554  343307 cli_runner.go:164] Run: docker container inspect ha-807463 --format={{.State.Status}}
	I1009 19:27:31.752533  343307 kic.go:430] container "ha-807463" state is running.
	I1009 19:27:31.752940  343307 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-807463
	I1009 19:27:31.776613  343307 profile.go:143] Saving config to /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/ha-807463/config.json ...
	I1009 19:27:31.776858  343307 machine.go:93] provisionDockerMachine start ...
	I1009 19:27:31.776933  343307 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-807463
	I1009 19:27:31.798253  343307 main.go:141] libmachine: Using SSH client type: native
	I1009 19:27:31.798586  343307 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33181 <nil> <nil>}
	I1009 19:27:31.798603  343307 main.go:141] libmachine: About to run SSH command:
	hostname
	I1009 19:27:31.799247  343307 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1009 19:27:34.945362  343307 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-807463
	
	I1009 19:27:34.945397  343307 ubuntu.go:182] provisioning hostname "ha-807463"
	I1009 19:27:34.945467  343307 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-807463
	I1009 19:27:34.962891  343307 main.go:141] libmachine: Using SSH client type: native
	I1009 19:27:34.963208  343307 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33181 <nil> <nil>}
	I1009 19:27:34.963226  343307 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-807463 && echo "ha-807463" | sudo tee /etc/hostname
	I1009 19:27:35.120375  343307 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-807463
	
	I1009 19:27:35.120459  343307 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-807463
	I1009 19:27:35.138932  343307 main.go:141] libmachine: Using SSH client type: native
	I1009 19:27:35.139244  343307 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33181 <nil> <nil>}
	I1009 19:27:35.139259  343307 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-807463' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-807463/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-807463' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1009 19:27:35.285402  343307 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1009 19:27:35.285451  343307 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21683-294150/.minikube CaCertPath:/home/jenkins/minikube-integration/21683-294150/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21683-294150/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21683-294150/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21683-294150/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21683-294150/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21683-294150/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21683-294150/.minikube}
	I1009 19:27:35.285478  343307 ubuntu.go:190] setting up certificates
	I1009 19:27:35.285488  343307 provision.go:84] configureAuth start
	I1009 19:27:35.285558  343307 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-807463
	I1009 19:27:35.302829  343307 provision.go:143] copyHostCerts
	I1009 19:27:35.302873  343307 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-294150/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21683-294150/.minikube/ca.pem
	I1009 19:27:35.302904  343307 exec_runner.go:144] found /home/jenkins/minikube-integration/21683-294150/.minikube/ca.pem, removing ...
	I1009 19:27:35.302917  343307 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21683-294150/.minikube/ca.pem
	I1009 19:27:35.303005  343307 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-294150/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21683-294150/.minikube/ca.pem (1078 bytes)
	I1009 19:27:35.303096  343307 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-294150/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21683-294150/.minikube/cert.pem
	I1009 19:27:35.303118  343307 exec_runner.go:144] found /home/jenkins/minikube-integration/21683-294150/.minikube/cert.pem, removing ...
	I1009 19:27:35.303127  343307 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21683-294150/.minikube/cert.pem
	I1009 19:27:35.303156  343307 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-294150/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21683-294150/.minikube/cert.pem (1123 bytes)
	I1009 19:27:35.303204  343307 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-294150/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21683-294150/.minikube/key.pem
	I1009 19:27:35.303225  343307 exec_runner.go:144] found /home/jenkins/minikube-integration/21683-294150/.minikube/key.pem, removing ...
	I1009 19:27:35.303230  343307 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21683-294150/.minikube/key.pem
	I1009 19:27:35.303255  343307 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-294150/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21683-294150/.minikube/key.pem (1679 bytes)
	I1009 19:27:35.303308  343307 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21683-294150/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21683-294150/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21683-294150/.minikube/certs/ca-key.pem org=jenkins.ha-807463 san=[127.0.0.1 192.168.49.2 ha-807463 localhost minikube]
	I1009 19:27:35.901224  343307 provision.go:177] copyRemoteCerts
	I1009 19:27:35.901289  343307 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1009 19:27:35.901355  343307 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-807463
	I1009 19:27:35.918214  343307 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33181 SSHKeyPath:/home/jenkins/minikube-integration/21683-294150/.minikube/machines/ha-807463/id_rsa Username:docker}
	I1009 19:27:36.021624  343307 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-294150/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1009 19:27:36.021693  343307 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-294150/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1009 19:27:36.040520  343307 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-294150/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1009 19:27:36.040583  343307 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-294150/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I1009 19:27:36.059254  343307 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-294150/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1009 19:27:36.059315  343307 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-294150/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1009 19:27:36.078084  343307 provision.go:87] duration metric: took 792.56918ms to configureAuth
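
	Note: the server cert generated just above carries the SANs listed in the provision line (127.0.0.1, 192.168.49.2, ha-807463, localhost, minikube). If that list ever needs checking, it can be read straight from the file right after this step; a small sketch, assuming openssl is available on the control host:
		openssl x509 -noout -subject -in /home/jenkins/minikube-integration/21683-294150/.minikube/machines/server.pem
		openssl x509 -noout -text -in /home/jenkins/minikube-integration/21683-294150/.minikube/machines/server.pem | grep -A1 'Subject Alternative Name'
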
	I1009 19:27:36.078112  343307 ubuntu.go:206] setting minikube options for container-runtime
	I1009 19:27:36.078344  343307 config.go:182] Loaded profile config "ha-807463": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1009 19:27:36.078465  343307 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-807463
	I1009 19:27:36.095675  343307 main.go:141] libmachine: Using SSH client type: native
	I1009 19:27:36.095992  343307 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33181 <nil> <nil>}
	I1009 19:27:36.096012  343307 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1009 19:27:36.425006  343307 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1009 19:27:36.425081  343307 machine.go:96] duration metric: took 4.648205511s to provisionDockerMachine
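
	Note: provisionDockerMachine ends by writing a one-line /etc/sysconfig/crio.minikube and restarting CRI-O over SSH. If the insecure-registry flag ever needs verifying on the node, an illustrative check (assuming the minikube binary used for this run is on PATH):
		minikube -p ha-807463 ssh -- cat /etc/sysconfig/crio.minikube
		# expected: CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
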
	I1009 19:27:36.425141  343307 start.go:294] postStartSetup for "ha-807463" (driver="docker")
	I1009 19:27:36.425177  343307 start.go:323] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1009 19:27:36.425298  343307 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1009 19:27:36.425384  343307 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-807463
	I1009 19:27:36.449453  343307 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33181 SSHKeyPath:/home/jenkins/minikube-integration/21683-294150/.minikube/machines/ha-807463/id_rsa Username:docker}
	I1009 19:27:36.553510  343307 ssh_runner.go:195] Run: cat /etc/os-release
	I1009 19:27:36.557246  343307 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1009 19:27:36.557278  343307 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1009 19:27:36.557290  343307 filesync.go:126] Scanning /home/jenkins/minikube-integration/21683-294150/.minikube/addons for local assets ...
	I1009 19:27:36.557367  343307 filesync.go:126] Scanning /home/jenkins/minikube-integration/21683-294150/.minikube/files for local assets ...
	I1009 19:27:36.557489  343307 filesync.go:149] local asset: /home/jenkins/minikube-integration/21683-294150/.minikube/files/etc/ssl/certs/2960022.pem -> 2960022.pem in /etc/ssl/certs
	I1009 19:27:36.557501  343307 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-294150/.minikube/files/etc/ssl/certs/2960022.pem -> /etc/ssl/certs/2960022.pem
	I1009 19:27:36.557607  343307 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1009 19:27:36.565210  343307 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-294150/.minikube/files/etc/ssl/certs/2960022.pem --> /etc/ssl/certs/2960022.pem (1708 bytes)
	I1009 19:27:36.583083  343307 start.go:297] duration metric: took 157.903278ms for postStartSetup
	I1009 19:27:36.583210  343307 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1009 19:27:36.583282  343307 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-807463
	I1009 19:27:36.600612  343307 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33181 SSHKeyPath:/home/jenkins/minikube-integration/21683-294150/.minikube/machines/ha-807463/id_rsa Username:docker}
	I1009 19:27:36.698274  343307 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1009 19:27:36.703016  343307 fix.go:57] duration metric: took 5.256907577s for fixHost
	I1009 19:27:36.703042  343307 start.go:84] releasing machines lock for "ha-807463", held for 5.256957103s
	I1009 19:27:36.703115  343307 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-807463
	I1009 19:27:36.720370  343307 ssh_runner.go:195] Run: cat /version.json
	I1009 19:27:36.720385  343307 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1009 19:27:36.720422  343307 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-807463
	I1009 19:27:36.720451  343307 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-807463
	I1009 19:27:36.743233  343307 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33181 SSHKeyPath:/home/jenkins/minikube-integration/21683-294150/.minikube/machines/ha-807463/id_rsa Username:docker}
	I1009 19:27:36.753326  343307 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33181 SSHKeyPath:/home/jenkins/minikube-integration/21683-294150/.minikube/machines/ha-807463/id_rsa Username:docker}
	I1009 19:27:36.948710  343307 ssh_runner.go:195] Run: systemctl --version
	I1009 19:27:36.955436  343307 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1009 19:27:36.994992  343307 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1009 19:27:37.001157  343307 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1009 19:27:37.001242  343307 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1009 19:27:37.015899  343307 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1009 19:27:37.015931  343307 start.go:496] detecting cgroup driver to use...
	I1009 19:27:37.016002  343307 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1009 19:27:37.016099  343307 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1009 19:27:37.034350  343307 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1009 19:27:37.049609  343307 docker.go:218] disabling cri-docker service (if available) ...
	I1009 19:27:37.049706  343307 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1009 19:27:37.065757  343307 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1009 19:27:37.079370  343307 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1009 19:27:37.204726  343307 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1009 19:27:37.324926  343307 docker.go:234] disabling docker service ...
	I1009 19:27:37.325051  343307 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1009 19:27:37.340669  343307 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1009 19:27:37.354186  343307 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1009 19:27:37.468499  343307 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1009 19:27:37.609321  343307 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1009 19:27:37.623308  343307 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1009 19:27:37.638872  343307 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1009 19:27:37.638957  343307 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:27:37.648255  343307 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1009 19:27:37.648376  343307 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:27:37.658302  343307 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:27:37.667181  343307 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:27:37.675984  343307 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1009 19:27:37.685440  343307 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:27:37.694680  343307 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:27:37.702750  343307 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:27:37.711421  343307 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1009 19:27:37.719182  343307 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1009 19:27:37.727483  343307 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 19:27:37.841375  343307 ssh_runner.go:195] Run: sudo systemctl restart crio
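
	Note: the sed calls above patch individual keys in /etc/crio/crio.conf.d/02-crio.conf (pause_image, cgroup_manager, conmon_cgroup, default_sysctls) before this restart. The effective values can be read back from crio itself rather than from the drop-in file, using the same `crio config` command this log runs later; illustrative:
		sudo crio config 2>/dev/null | grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start'
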
	I1009 19:27:37.980708  343307 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1009 19:27:37.980812  343307 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1009 19:27:37.984807  343307 start.go:564] Will wait 60s for crictl version
	I1009 19:27:37.984933  343307 ssh_runner.go:195] Run: which crictl
	I1009 19:27:37.988572  343307 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1009 19:27:38.021983  343307 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1009 19:27:38.022073  343307 ssh_runner.go:195] Run: crio --version
	I1009 19:27:38.052703  343307 ssh_runner.go:195] Run: crio --version
	I1009 19:27:38.085238  343307 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1009 19:27:38.088088  343307 cli_runner.go:164] Run: docker network inspect ha-807463 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1009 19:27:38.104470  343307 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1009 19:27:38.108353  343307 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
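
	Note: the /etc/hosts update above deliberately rebuilds the file in /tmp and copies it back with `cp` rather than editing with sed -i or renaming with mv: inside a Docker container /etc/hosts is a bind mount, so it has to be rewritten in place, keeping its inode. The same pattern on its own (illustrative):
		{ grep -v $'\thost.minikube.internal$' /etc/hosts; echo "192.168.49.1	host.minikube.internal"; } > /tmp/hosts.new
		sudo cp /tmp/hosts.new /etc/hosts    # cp writes in place; mv would break the bind mount
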
	I1009 19:27:38.118588  343307 kubeadm.go:883] updating cluster {Name:ha-807463 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-807463 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APISe
rverNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:
false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:f
alse DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1009 19:27:38.118741  343307 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1009 19:27:38.118810  343307 ssh_runner.go:195] Run: sudo crictl images --output json
	I1009 19:27:38.155316  343307 crio.go:514] all images are preloaded for cri-o runtime.
	I1009 19:27:38.155341  343307 crio.go:433] Images already preloaded, skipping extraction
	I1009 19:27:38.155400  343307 ssh_runner.go:195] Run: sudo crictl images --output json
	I1009 19:27:38.184223  343307 crio.go:514] all images are preloaded for cri-o runtime.
	I1009 19:27:38.184246  343307 cache_images.go:85] Images are preloaded, skipping loading
	I1009 19:27:38.184257  343307 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.34.1 crio true true} ...
	I1009 19:27:38.184370  343307 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-807463 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:ha-807463 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
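
	Note: the [Unit]/[Service] snippet above is the kubelet systemd drop-in for this node; it is scp'd to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf later in this log. Once the later daemon-reload has run, what systemd actually loaded can be checked with (illustrative):
		sudo systemctl cat kubelet
		sudo systemctl show kubelet -p ExecStart --no-pager
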
	I1009 19:27:38.184448  343307 ssh_runner.go:195] Run: crio config
	I1009 19:27:38.252414  343307 cni.go:84] Creating CNI manager for ""
	I1009 19:27:38.252436  343307 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I1009 19:27:38.252454  343307 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1009 19:27:38.252488  343307 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-807463 NodeName:ha-807463 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/mani
fests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1009 19:27:38.252634  343307 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-807463"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1009 19:27:38.252656  343307 kube-vip.go:115] generating kube-vip config ...
	I1009 19:27:38.252721  343307 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I1009 19:27:38.265014  343307 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
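
	Note: the empty lsmod output above is why IPVS-based control-plane load-balancing is skipped; the manifest generated below still configures the ARP-advertised VIP (vip_arp: "true"). If IPVS were wanted, the modules would typically be loaded on the host kernel first (illustrative; module availability depends on the kernel build):
		sudo modprobe -a ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh
		lsmod | grep ip_vs
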
	I1009 19:27:38.265147  343307 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
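
	Note: the kube-vip manifest above is deployed as a static pod; it is scp'd to /etc/kubernetes/manifests/kube-vip.yaml below, so the kubelet starts it directly and the API server only sees a mirror pod. A couple of illustrative checks once the node is up (the pod name assumes the usual <name>-<node> mirror-pod convention, and the VIP only sits on the current kube-vip leader):
		kubectl -n kube-system get pod kube-vip-ha-807463 -o wide
		ip addr show dev eth0 | grep 192.168.49.254    # the VIP from the manifest, on vip_interface eth0
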
	I1009 19:27:38.265209  343307 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1009 19:27:38.272978  343307 binaries.go:44] Found k8s binaries, skipping transfer
	I1009 19:27:38.273096  343307 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I1009 19:27:38.280861  343307 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (359 bytes)
	I1009 19:27:38.294726  343307 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1009 19:27:38.307657  343307 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2206 bytes)
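
	Note: the 2206-byte file written here is the kubeadm config rendered earlier in this log. If it ever needs a manual sanity check, recent kubeadm releases can validate it in place (sketch; assumes kubeadm sits next to the kubelet under /var/lib/minikube/binaries/v1.34.1, as the binaries check above suggests):
		sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new
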
	I1009 19:27:38.320684  343307 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I1009 19:27:38.333393  343307 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1009 19:27:38.337014  343307 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1009 19:27:38.346725  343307 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 19:27:38.455808  343307 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1009 19:27:38.472442  343307 certs.go:69] Setting up /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/ha-807463 for IP: 192.168.49.2
	I1009 19:27:38.472472  343307 certs.go:195] generating shared ca certs ...
	I1009 19:27:38.472489  343307 certs.go:227] acquiring lock for ca certs: {Name:mk21fccf7de12baa51e226eec44546b42b579a70 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 19:27:38.472635  343307 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21683-294150/.minikube/ca.key
	I1009 19:27:38.472702  343307 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21683-294150/.minikube/proxy-client-ca.key
	I1009 19:27:38.472715  343307 certs.go:257] generating profile certs ...
	I1009 19:27:38.472790  343307 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/ha-807463/client.key
	I1009 19:27:38.472829  343307 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/ha-807463/apiserver.key.2f140c92
	I1009 19:27:38.472846  343307 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/ha-807463/apiserver.crt.2f140c92 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2 192.168.49.3 192.168.49.4 192.168.49.254]
	I1009 19:27:38.846814  343307 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/ha-807463/apiserver.crt.2f140c92 ...
	I1009 19:27:38.846850  343307 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/ha-807463/apiserver.crt.2f140c92: {Name:mkc2191acbc8bdf29d69f0113598f387f3156525 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 19:27:38.847045  343307 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/ha-807463/apiserver.key.2f140c92 ...
	I1009 19:27:38.847059  343307 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/ha-807463/apiserver.key.2f140c92: {Name:mk4420d6a062c4dab2900704e5add4b492d36555 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 19:27:38.847148  343307 certs.go:382] copying /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/ha-807463/apiserver.crt.2f140c92 -> /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/ha-807463/apiserver.crt
	I1009 19:27:38.847292  343307 certs.go:386] copying /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/ha-807463/apiserver.key.2f140c92 -> /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/ha-807463/apiserver.key
	I1009 19:27:38.847425  343307 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/ha-807463/proxy-client.key
	I1009 19:27:38.847442  343307 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-294150/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1009 19:27:38.847458  343307 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-294150/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1009 19:27:38.847476  343307 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-294150/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1009 19:27:38.847488  343307 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-294150/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1009 19:27:38.847504  343307 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/ha-807463/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1009 19:27:38.847525  343307 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/ha-807463/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1009 19:27:38.847541  343307 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/ha-807463/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1009 19:27:38.847559  343307 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/ha-807463/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1009 19:27:38.847611  343307 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-294150/.minikube/certs/296002.pem (1338 bytes)
	W1009 19:27:38.847645  343307 certs.go:480] ignoring /home/jenkins/minikube-integration/21683-294150/.minikube/certs/296002_empty.pem, impossibly tiny 0 bytes
	I1009 19:27:38.847656  343307 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-294150/.minikube/certs/ca-key.pem (1679 bytes)
	I1009 19:27:38.847681  343307 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-294150/.minikube/certs/ca.pem (1078 bytes)
	I1009 19:27:38.847709  343307 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-294150/.minikube/certs/cert.pem (1123 bytes)
	I1009 19:27:38.847733  343307 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-294150/.minikube/certs/key.pem (1679 bytes)
	I1009 19:27:38.847781  343307 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-294150/.minikube/files/etc/ssl/certs/2960022.pem (1708 bytes)
	I1009 19:27:38.847811  343307 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-294150/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1009 19:27:38.847826  343307 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-294150/.minikube/certs/296002.pem -> /usr/share/ca-certificates/296002.pem
	I1009 19:27:38.847838  343307 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-294150/.minikube/files/etc/ssl/certs/2960022.pem -> /usr/share/ca-certificates/2960022.pem
	I1009 19:27:38.848384  343307 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-294150/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1009 19:27:38.867598  343307 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-294150/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1009 19:27:38.888313  343307 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-294150/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1009 19:27:38.908288  343307 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-294150/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1009 19:27:38.929572  343307 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/ha-807463/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1009 19:27:38.949045  343307 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/ha-807463/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1009 19:27:38.966969  343307 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/ha-807463/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1009 19:27:38.986319  343307 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/ha-807463/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1009 19:27:39.012715  343307 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-294150/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1009 19:27:39.032678  343307 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-294150/.minikube/certs/296002.pem --> /usr/share/ca-certificates/296002.pem (1338 bytes)
	I1009 19:27:39.051431  343307 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-294150/.minikube/files/etc/ssl/certs/2960022.pem --> /usr/share/ca-certificates/2960022.pem (1708 bytes)
	I1009 19:27:39.069614  343307 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1009 19:27:39.090445  343307 ssh_runner.go:195] Run: openssl version
	I1009 19:27:39.098940  343307 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1009 19:27:39.108430  343307 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1009 19:27:39.119839  343307 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  9 19:01 /usr/share/ca-certificates/minikubeCA.pem
	I1009 19:27:39.119907  343307 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1009 19:27:39.188461  343307 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1009 19:27:39.197309  343307 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/296002.pem && ln -fs /usr/share/ca-certificates/296002.pem /etc/ssl/certs/296002.pem"
	I1009 19:27:39.212076  343307 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/296002.pem
	I1009 19:27:39.218737  343307 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  9 19:08 /usr/share/ca-certificates/296002.pem
	I1009 19:27:39.218850  343307 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/296002.pem
	I1009 19:27:39.320003  343307 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/296002.pem /etc/ssl/certs/51391683.0"
	I1009 19:27:39.338511  343307 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2960022.pem && ln -fs /usr/share/ca-certificates/2960022.pem /etc/ssl/certs/2960022.pem"
	I1009 19:27:39.353078  343307 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2960022.pem
	I1009 19:27:39.358619  343307 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  9 19:08 /usr/share/ca-certificates/2960022.pem
	I1009 19:27:39.358736  343307 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2960022.pem
	I1009 19:27:39.417831  343307 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2960022.pem /etc/ssl/certs/3ec20f2e.0"
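
	Note: the ln -fs commands above follow OpenSSL's hashed-directory convention: each CA under /etc/ssl/certs is reachable as <subject-hash>.0, where the hash is exactly what the preceding `openssl x509 -hash -noout` calls compute. Reproducing one link by hand (illustrative):
		openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem    # prints b5213941, per the link above
		ls -l /etc/ssl/certs/b5213941.0
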
	I1009 19:27:39.430407  343307 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1009 19:27:39.437508  343307 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1009 19:27:39.502060  343307 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1009 19:27:39.549190  343307 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1009 19:27:39.599910  343307 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1009 19:27:39.657699  343307 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1009 19:27:39.729015  343307 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
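
	Note: `-checkend 86400` makes openssl exit non-zero if the certificate expires within the next 86400 seconds (24 hours); that exit status is what decides here whether the existing control-plane certs can be reused. The same check by hand for one of them (illustrative):
		sudo openssl x509 -noout -enddate -in /var/lib/minikube/certs/apiserver-kubelet-client.crt
		sudo openssl x509 -noout -checkend 86400 -in /var/lib/minikube/certs/apiserver-kubelet-client.crt && echo "valid for at least another 24h"
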
	I1009 19:27:39.791014  343307 kubeadm.go:400] StartCluster: {Name:ha-807463 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-807463 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServe
rNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:fal
se ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:fals
e DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1009 19:27:39.791208  343307 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1009 19:27:39.791318  343307 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1009 19:27:39.827907  343307 cri.go:89] found id: "9d475a483e7023b214d8a1506f2ba793d2cb34e4e0e7b5f0fc49d91b875116f7"
	I1009 19:27:39.827980  343307 cri.go:89] found id: "eb3eb3edb2fff30f90b98210a15c7960a0d8f4700c380a4bc2a236e3530d4043"
	I1009 19:27:39.828002  343307 cri.go:89] found id: "e4593fb70e6dd0047bc83f89897d4c1ad23896e5ca9a3628c4bbeea360f8cbaf"
	I1009 19:27:39.828027  343307 cri.go:89] found id: "60abd5bf9ea13b7e15b4cb133643cb620ae0f536d45d6ac30703be2e3ef7a45f"
	I1009 19:27:39.828064  343307 cri.go:89] found id: "4477522bd8536fe09afcc2397cd8beb927ccd19a6714098fb7bb1f3ef47595ea"
	I1009 19:27:39.828090  343307 cri.go:89] found id: ""
	I1009 19:27:39.828175  343307 ssh_runner.go:195] Run: sudo runc list -f json
	W1009 19:27:39.846495  343307 kubeadm.go:407] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-09T19:27:39Z" level=error msg="open /run/runc: no such file or directory"
	I1009 19:27:39.846575  343307 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1009 19:27:39.873447  343307 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1009 19:27:39.873525  343307 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1009 19:27:39.873618  343307 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1009 19:27:39.890893  343307 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1009 19:27:39.891370  343307 kubeconfig.go:47] verify endpoint returned: get endpoint: "ha-807463" does not appear in /home/jenkins/minikube-integration/21683-294150/kubeconfig
	I1009 19:27:39.891541  343307 kubeconfig.go:62] /home/jenkins/minikube-integration/21683-294150/kubeconfig needs updating (will repair): [kubeconfig missing "ha-807463" cluster setting kubeconfig missing "ha-807463" context setting]
	I1009 19:27:39.891898  343307 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-294150/kubeconfig: {Name:mke4661344aa77e10bf6690825e8d3a29b29b411 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 19:27:39.892555  343307 kapi.go:59] client config for ha-807463: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21683-294150/.minikube/profiles/ha-807463/client.crt", KeyFile:"/home/jenkins/minikube-integration/21683-294150/.minikube/profiles/ha-807463/client.key", CAFile:"/home/jenkins/minikube-integration/21683-294150/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil
)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2120180), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1009 19:27:39.893429  343307 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1009 19:27:39.893485  343307 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1009 19:27:39.893506  343307 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1009 19:27:39.893530  343307 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1009 19:27:39.893571  343307 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1009 19:27:39.894036  343307 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1009 19:27:39.894259  343307 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1009 19:27:39.909848  343307 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.49.2
	I1009 19:27:39.909926  343307 kubeadm.go:601] duration metric: took 36.380579ms to restartPrimaryControlPlane
	I1009 19:27:39.909962  343307 kubeadm.go:402] duration metric: took 118.974675ms to StartCluster
	I1009 19:27:39.909997  343307 settings.go:142] acquiring lock: {Name:mk20228ebaa2294ae35726600a0d8058088b24a4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 19:27:39.910102  343307 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21683-294150/kubeconfig
	I1009 19:27:39.910819  343307 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-294150/kubeconfig: {Name:mke4661344aa77e10bf6690825e8d3a29b29b411 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 19:27:39.911409  343307 config.go:182] Loaded profile config "ha-807463": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1009 19:27:39.911493  343307 start.go:234] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1009 19:27:39.911613  343307 start.go:242] waiting for startup goroutines ...
	I1009 19:27:39.911544  343307 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1009 19:27:39.917562  343307 out.go:179] * Enabled addons: 
	I1009 19:27:39.920371  343307 addons.go:514] duration metric: took 8.815745ms for enable addons: enabled=[]
	I1009 19:27:39.920465  343307 start.go:247] waiting for cluster config update ...
	I1009 19:27:39.920489  343307 start.go:256] writing updated cluster config ...
	I1009 19:27:39.924923  343307 out.go:203] 
	I1009 19:27:39.928045  343307 config.go:182] Loaded profile config "ha-807463": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1009 19:27:39.928167  343307 profile.go:143] Saving config to /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/ha-807463/config.json ...
	I1009 19:27:39.931505  343307 out.go:179] * Starting "ha-807463-m02" control-plane node in "ha-807463" cluster
	I1009 19:27:39.934402  343307 cache.go:123] Beginning downloading kic base image for docker with crio
	I1009 19:27:39.937316  343307 out.go:179] * Pulling base image v0.0.48-1759745255-21703 ...
	I1009 19:27:39.940080  343307 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1009 19:27:39.940107  343307 cache.go:58] Caching tarball of preloaded images
	I1009 19:27:39.940210  343307 preload.go:233] Found /home/jenkins/minikube-integration/21683-294150/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1009 19:27:39.940220  343307 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1009 19:27:39.940348  343307 profile.go:143] Saving config to /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/ha-807463/config.json ...
	I1009 19:27:39.940566  343307 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon
	I1009 19:27:39.975622  343307 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon, skipping pull
	I1009 19:27:39.975643  343307 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 exists in daemon, skipping load
	I1009 19:27:39.975657  343307 cache.go:232] Successfully downloaded all kic artifacts
	I1009 19:27:39.975682  343307 start.go:361] acquireMachinesLock for ha-807463-m02: {Name:mk6ba8ff733306501b688f1b4a216ac9e405e90f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1009 19:27:39.975736  343307 start.go:365] duration metric: took 39.187µs to acquireMachinesLock for "ha-807463-m02"
	I1009 19:27:39.975756  343307 start.go:97] Skipping create...Using existing machine configuration
	I1009 19:27:39.975761  343307 fix.go:55] fixHost starting: m02
	I1009 19:27:39.976050  343307 cli_runner.go:164] Run: docker container inspect ha-807463-m02 --format={{.State.Status}}
	I1009 19:27:40.012164  343307 fix.go:113] recreateIfNeeded on ha-807463-m02: state=Stopped err=<nil>
	W1009 19:27:40.012195  343307 fix.go:139] unexpected machine state, will restart: <nil>
	I1009 19:27:40.015441  343307 out.go:252] * Restarting existing docker container for "ha-807463-m02" ...
	I1009 19:27:40.015539  343307 cli_runner.go:164] Run: docker start ha-807463-m02
	I1009 19:27:40.410002  343307 cli_runner.go:164] Run: docker container inspect ha-807463-m02 --format={{.State.Status}}
	I1009 19:27:40.445455  343307 kic.go:430] container "ha-807463-m02" state is running.
	I1009 19:27:40.445851  343307 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-807463-m02
	I1009 19:27:40.474228  343307 profile.go:143] Saving config to /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/ha-807463/config.json ...
	I1009 19:27:40.474476  343307 machine.go:93] provisionDockerMachine start ...
	I1009 19:27:40.474538  343307 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-807463-m02
	I1009 19:27:40.505891  343307 main.go:141] libmachine: Using SSH client type: native
	I1009 19:27:40.506192  343307 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33186 <nil> <nil>}
	I1009 19:27:40.506201  343307 main.go:141] libmachine: About to run SSH command:
	hostname
	I1009 19:27:40.506929  343307 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:41996->127.0.0.1:33186: read: connection reset by peer
	I1009 19:27:43.729947  343307 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-807463-m02
	
	I1009 19:27:43.729974  343307 ubuntu.go:182] provisioning hostname "ha-807463-m02"
	I1009 19:27:43.730046  343307 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-807463-m02
	I1009 19:27:43.750597  343307 main.go:141] libmachine: Using SSH client type: native
	I1009 19:27:43.750914  343307 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33186 <nil> <nil>}
	I1009 19:27:43.750934  343307 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-807463-m02 && echo "ha-807463-m02" | sudo tee /etc/hostname
	I1009 19:27:44.042915  343307 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-807463-m02
	
	I1009 19:27:44.043000  343307 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-807463-m02
	I1009 19:27:44.070967  343307 main.go:141] libmachine: Using SSH client type: native
	I1009 19:27:44.071275  343307 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33186 <nil> <nil>}
	I1009 19:27:44.071306  343307 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-807463-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-807463-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-807463-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1009 19:27:44.341979  343307 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1009 19:27:44.342008  343307 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21683-294150/.minikube CaCertPath:/home/jenkins/minikube-integration/21683-294150/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21683-294150/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21683-294150/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21683-294150/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21683-294150/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21683-294150/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21683-294150/.minikube}
	I1009 19:27:44.342024  343307 ubuntu.go:190] setting up certificates
	I1009 19:27:44.342039  343307 provision.go:84] configureAuth start
	I1009 19:27:44.342104  343307 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-807463-m02
	I1009 19:27:44.370782  343307 provision.go:143] copyHostCerts
	I1009 19:27:44.370832  343307 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-294150/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21683-294150/.minikube/key.pem
	I1009 19:27:44.370866  343307 exec_runner.go:144] found /home/jenkins/minikube-integration/21683-294150/.minikube/key.pem, removing ...
	I1009 19:27:44.370878  343307 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21683-294150/.minikube/key.pem
	I1009 19:27:44.370961  343307 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-294150/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21683-294150/.minikube/key.pem (1679 bytes)
	I1009 19:27:44.371063  343307 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-294150/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21683-294150/.minikube/ca.pem
	I1009 19:27:44.371087  343307 exec_runner.go:144] found /home/jenkins/minikube-integration/21683-294150/.minikube/ca.pem, removing ...
	I1009 19:27:44.371095  343307 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21683-294150/.minikube/ca.pem
	I1009 19:27:44.371128  343307 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-294150/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21683-294150/.minikube/ca.pem (1078 bytes)
	I1009 19:27:44.371178  343307 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-294150/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21683-294150/.minikube/cert.pem
	I1009 19:27:44.371200  343307 exec_runner.go:144] found /home/jenkins/minikube-integration/21683-294150/.minikube/cert.pem, removing ...
	I1009 19:27:44.371210  343307 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21683-294150/.minikube/cert.pem
	I1009 19:27:44.371237  343307 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-294150/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21683-294150/.minikube/cert.pem (1123 bytes)
	I1009 19:27:44.371335  343307 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21683-294150/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21683-294150/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21683-294150/.minikube/certs/ca-key.pem org=jenkins.ha-807463-m02 san=[127.0.0.1 192.168.49.3 ha-807463-m02 localhost minikube]
	I1009 19:27:45.671497  343307 provision.go:177] copyRemoteCerts
	I1009 19:27:45.671655  343307 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1009 19:27:45.671727  343307 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-807463-m02
	I1009 19:27:45.689990  343307 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33186 SSHKeyPath:/home/jenkins/minikube-integration/21683-294150/.minikube/machines/ha-807463-m02/id_rsa Username:docker}
	I1009 19:27:45.879571  343307 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-294150/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1009 19:27:45.879633  343307 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-294150/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1009 19:27:45.934252  343307 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-294150/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1009 19:27:45.934317  343307 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-294150/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1009 19:27:46.015412  343307 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-294150/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1009 19:27:46.015492  343307 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-294150/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1009 19:27:46.095867  343307 provision.go:87] duration metric: took 1.753810196s to configureAuth
	I1009 19:27:46.095898  343307 ubuntu.go:206] setting minikube options for container-runtime
	I1009 19:27:46.096158  343307 config.go:182] Loaded profile config "ha-807463": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1009 19:27:46.096279  343307 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-807463-m02
	I1009 19:27:46.134871  343307 main.go:141] libmachine: Using SSH client type: native
	I1009 19:27:46.135193  343307 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33186 <nil> <nil>}
	I1009 19:27:46.135215  343307 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1009 19:27:47.743001  343307 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1009 19:27:47.743025  343307 machine.go:96] duration metric: took 7.268539709s to provisionDockerMachine
	I1009 19:27:47.743037  343307 start.go:294] postStartSetup for "ha-807463-m02" (driver="docker")
	I1009 19:27:47.743048  343307 start.go:323] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1009 19:27:47.743114  343307 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1009 19:27:47.743178  343307 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-807463-m02
	I1009 19:27:47.763602  343307 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33186 SSHKeyPath:/home/jenkins/minikube-integration/21683-294150/.minikube/machines/ha-807463-m02/id_rsa Username:docker}
	I1009 19:27:47.878489  343307 ssh_runner.go:195] Run: cat /etc/os-release
	I1009 19:27:47.882311  343307 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1009 19:27:47.882390  343307 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1009 19:27:47.882425  343307 filesync.go:126] Scanning /home/jenkins/minikube-integration/21683-294150/.minikube/addons for local assets ...
	I1009 19:27:47.882513  343307 filesync.go:126] Scanning /home/jenkins/minikube-integration/21683-294150/.minikube/files for local assets ...
	I1009 19:27:47.882649  343307 filesync.go:149] local asset: /home/jenkins/minikube-integration/21683-294150/.minikube/files/etc/ssl/certs/2960022.pem -> 2960022.pem in /etc/ssl/certs
	I1009 19:27:47.882678  343307 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-294150/.minikube/files/etc/ssl/certs/2960022.pem -> /etc/ssl/certs/2960022.pem
	I1009 19:27:47.882829  343307 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1009 19:27:47.895445  343307 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-294150/.minikube/files/etc/ssl/certs/2960022.pem --> /etc/ssl/certs/2960022.pem (1708 bytes)
	I1009 19:27:47.923753  343307 start.go:297] duration metric: took 180.689414ms for postStartSetup
	I1009 19:27:47.923906  343307 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1009 19:27:47.923987  343307 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-807463-m02
	I1009 19:27:47.943574  343307 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33186 SSHKeyPath:/home/jenkins/minikube-integration/21683-294150/.minikube/machines/ha-807463-m02/id_rsa Username:docker}
	I1009 19:27:48.072414  343307 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1009 19:27:48.090538  343307 fix.go:57] duration metric: took 8.114767256s for fixHost
	I1009 19:27:48.090623  343307 start.go:84] releasing machines lock for "ha-807463-m02", held for 8.114877188s
	I1009 19:27:48.090728  343307 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-807463-m02
	I1009 19:27:48.124084  343307 out.go:179] * Found network options:
	I1009 19:27:48.127431  343307 out.go:179]   - NO_PROXY=192.168.49.2
	W1009 19:27:48.131026  343307 proxy.go:120] fail to check proxy env: Error ip not in block
	W1009 19:27:48.131071  343307 proxy.go:120] fail to check proxy env: Error ip not in block
	I1009 19:27:48.131145  343307 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1009 19:27:48.131185  343307 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-807463-m02
	I1009 19:27:48.131442  343307 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1009 19:27:48.131511  343307 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-807463-m02
	I1009 19:27:48.169238  343307 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33186 SSHKeyPath:/home/jenkins/minikube-integration/21683-294150/.minikube/machines/ha-807463-m02/id_rsa Username:docker}
	I1009 19:27:48.169825  343307 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33186 SSHKeyPath:/home/jenkins/minikube-integration/21683-294150/.minikube/machines/ha-807463-m02/id_rsa Username:docker}
	I1009 19:27:48.682814  343307 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1009 19:27:48.688162  343307 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1009 19:27:48.688239  343307 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1009 19:27:48.699171  343307 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1009 19:27:48.699193  343307 start.go:496] detecting cgroup driver to use...
	I1009 19:27:48.699225  343307 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1009 19:27:48.699282  343307 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1009 19:27:48.728026  343307 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1009 19:27:48.752647  343307 docker.go:218] disabling cri-docker service (if available) ...
	I1009 19:27:48.752765  343307 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1009 19:27:48.774861  343307 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1009 19:27:48.799117  343307 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1009 19:27:49.042961  343307 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1009 19:27:49.283614  343307 docker.go:234] disabling docker service ...
	I1009 19:27:49.283734  343307 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1009 19:27:49.307987  343307 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1009 19:27:49.328204  343307 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1009 19:27:49.580623  343307 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1009 19:27:49.895453  343307 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1009 19:27:49.919339  343307 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1009 19:27:49.947539  343307 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1009 19:27:49.947656  343307 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:27:49.962511  343307 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1009 19:27:49.962650  343307 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:27:49.979924  343307 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:27:49.995805  343307 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:27:50.007931  343307 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1009 19:27:50.028218  343307 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:27:50.068031  343307 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:27:50.096196  343307 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
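
The sed commands above rewrite /etc/crio/crio.conf.d/02-crio.conf so CRI-O uses registry.k8s.io/pause:3.10.1 as the pause image, cgroupfs as the cgroup manager, conmon_cgroup = "pod", and a default_sysctls list containing net.ipv4.ip_unprivileged_port_start=0. A hedged Go sketch of the first of those edits (the pause_image rewrite) as an in-place regexp replace; it mirrors the sed invocation in the log and is not minikube's code.

    // Hedged sketch of the pause_image rewrite performed by the sed command above.
    // The path and replacement value mirror the log; error handling is elided.
    package main

    import (
        "os"
        "regexp"
    )

    func main() {
        const conf = "/etc/crio/crio.conf.d/02-crio.conf"
        data, _ := os.ReadFile(conf)
        re := regexp.MustCompile(`(?m)^.*pause_image = .*$`)
        data = re.ReplaceAll(data, []byte(`pause_image = "registry.k8s.io/pause:3.10.1"`))
        os.WriteFile(conf, data, 0644)
    }
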
	I1009 19:27:50.122544  343307 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1009 19:27:50.151110  343307 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1009 19:27:50.173303  343307 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 19:27:50.489690  343307 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1009 19:27:50.773593  343307 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1009 19:27:50.773686  343307 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1009 19:27:50.777653  343307 start.go:564] Will wait 60s for crictl version
	I1009 19:27:50.777737  343307 ssh_runner.go:195] Run: which crictl
	I1009 19:27:50.781240  343307 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1009 19:27:50.810791  343307 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1009 19:27:50.810938  343307 ssh_runner.go:195] Run: crio --version
	I1009 19:27:50.840800  343307 ssh_runner.go:195] Run: crio --version
	I1009 19:27:50.876670  343307 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1009 19:27:50.879670  343307 out.go:179]   - env NO_PROXY=192.168.49.2
	I1009 19:27:50.882673  343307 cli_runner.go:164] Run: docker network inspect ha-807463 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1009 19:27:50.898864  343307 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1009 19:27:50.902801  343307 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1009 19:27:50.912892  343307 mustload.go:65] Loading cluster: ha-807463
	I1009 19:27:50.913185  343307 config.go:182] Loaded profile config "ha-807463": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1009 19:27:50.913459  343307 cli_runner.go:164] Run: docker container inspect ha-807463 --format={{.State.Status}}
	I1009 19:27:50.931384  343307 host.go:66] Checking if "ha-807463" exists ...
	I1009 19:27:50.931675  343307 certs.go:69] Setting up /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/ha-807463 for IP: 192.168.49.3
	I1009 19:27:50.931689  343307 certs.go:195] generating shared ca certs ...
	I1009 19:27:50.931705  343307 certs.go:227] acquiring lock for ca certs: {Name:mk21fccf7de12baa51e226eec44546b42b579a70 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 19:27:50.931837  343307 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21683-294150/.minikube/ca.key
	I1009 19:27:50.931898  343307 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21683-294150/.minikube/proxy-client-ca.key
	I1009 19:27:50.931911  343307 certs.go:257] generating profile certs ...
	I1009 19:27:50.931992  343307 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/ha-807463/client.key
	I1009 19:27:50.932059  343307 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/ha-807463/apiserver.key.0cec3fb8
	I1009 19:27:50.932139  343307 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/ha-807463/proxy-client.key
	I1009 19:27:50.932153  343307 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-294150/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1009 19:27:50.932166  343307 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-294150/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1009 19:27:50.932181  343307 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-294150/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1009 19:27:50.932192  343307 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-294150/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1009 19:27:50.932209  343307 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/ha-807463/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1009 19:27:50.932226  343307 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/ha-807463/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1009 19:27:50.932242  343307 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/ha-807463/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1009 19:27:50.932253  343307 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/ha-807463/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1009 19:27:50.932306  343307 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-294150/.minikube/certs/296002.pem (1338 bytes)
	W1009 19:27:50.932342  343307 certs.go:480] ignoring /home/jenkins/minikube-integration/21683-294150/.minikube/certs/296002_empty.pem, impossibly tiny 0 bytes
	I1009 19:27:50.932355  343307 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-294150/.minikube/certs/ca-key.pem (1679 bytes)
	I1009 19:27:50.932378  343307 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-294150/.minikube/certs/ca.pem (1078 bytes)
	I1009 19:27:50.932408  343307 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-294150/.minikube/certs/cert.pem (1123 bytes)
	I1009 19:27:50.932435  343307 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-294150/.minikube/certs/key.pem (1679 bytes)
	I1009 19:27:50.932481  343307 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-294150/.minikube/files/etc/ssl/certs/2960022.pem (1708 bytes)
	I1009 19:27:50.932513  343307 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-294150/.minikube/certs/296002.pem -> /usr/share/ca-certificates/296002.pem
	I1009 19:27:50.932528  343307 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-294150/.minikube/files/etc/ssl/certs/2960022.pem -> /usr/share/ca-certificates/2960022.pem
	I1009 19:27:50.932539  343307 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-294150/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1009 19:27:50.932602  343307 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-807463
	I1009 19:27:50.949747  343307 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33181 SSHKeyPath:/home/jenkins/minikube-integration/21683-294150/.minikube/machines/ha-807463/id_rsa Username:docker}
	I1009 19:27:51.053408  343307 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I1009 19:27:51.057364  343307 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I1009 19:27:51.066242  343307 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I1009 19:27:51.070160  343307 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I1009 19:27:51.082531  343307 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I1009 19:27:51.086523  343307 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I1009 19:27:51.095670  343307 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I1009 19:27:51.099538  343307 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I1009 19:27:51.108444  343307 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I1009 19:27:51.112383  343307 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I1009 19:27:51.121230  343307 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I1009 19:27:51.126634  343307 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I1009 19:27:51.135934  343307 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-294150/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1009 19:27:51.157827  343307 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-294150/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1009 19:27:51.177909  343307 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-294150/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1009 19:27:51.208380  343307 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-294150/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1009 19:27:51.233729  343307 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/ha-807463/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1009 19:27:51.254881  343307 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/ha-807463/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1009 19:27:51.273448  343307 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/ha-807463/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1009 19:27:51.293146  343307 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/ha-807463/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1009 19:27:51.312924  343307 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-294150/.minikube/certs/296002.pem --> /usr/share/ca-certificates/296002.pem (1338 bytes)
	I1009 19:27:51.335482  343307 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-294150/.minikube/files/etc/ssl/certs/2960022.pem --> /usr/share/ca-certificates/2960022.pem (1708 bytes)
	I1009 19:27:51.355302  343307 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-294150/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1009 19:27:51.375754  343307 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I1009 19:27:51.391115  343307 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I1009 19:27:51.404527  343307 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I1009 19:27:51.418174  343307 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I1009 19:27:51.431794  343307 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I1009 19:27:51.445219  343307 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I1009 19:27:51.460138  343307 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I1009 19:27:51.473336  343307 ssh_runner.go:195] Run: openssl version
	I1009 19:27:51.480063  343307 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/296002.pem && ln -fs /usr/share/ca-certificates/296002.pem /etc/ssl/certs/296002.pem"
	I1009 19:27:51.488916  343307 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/296002.pem
	I1009 19:27:51.493541  343307 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  9 19:08 /usr/share/ca-certificates/296002.pem
	I1009 19:27:51.493662  343307 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/296002.pem
	I1009 19:27:51.535043  343307 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/296002.pem /etc/ssl/certs/51391683.0"
	I1009 19:27:51.543247  343307 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2960022.pem && ln -fs /usr/share/ca-certificates/2960022.pem /etc/ssl/certs/2960022.pem"
	I1009 19:27:51.552252  343307 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2960022.pem
	I1009 19:27:51.556439  343307 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  9 19:08 /usr/share/ca-certificates/2960022.pem
	I1009 19:27:51.556553  343307 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2960022.pem
	I1009 19:27:51.598587  343307 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2960022.pem /etc/ssl/certs/3ec20f2e.0"
	I1009 19:27:51.607271  343307 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1009 19:27:51.616125  343307 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1009 19:27:51.620083  343307 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  9 19:01 /usr/share/ca-certificates/minikubeCA.pem
	I1009 19:27:51.620175  343307 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1009 19:27:51.664070  343307 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1009 19:27:51.672785  343307 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1009 19:27:51.676884  343307 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1009 19:27:51.718930  343307 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1009 19:27:51.761150  343307 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1009 19:27:51.802284  343307 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1009 19:27:51.843422  343307 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1009 19:27:51.890388  343307 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
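
Each openssl x509 -checkend 86400 run above exits non-zero if the certificate expires within the next 24 hours, which is how the existing control-plane certificates are vetted before being reused. An equivalent check in Go (a sketch; the path is one of the files probed above and error handling is minimal):

    // Hedged sketch of "openssl x509 -checkend 86400": report whether a PEM
    // certificate expires within the next 24 hours.
    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
        "time"
    )

    func main() {
        data, _ := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
        block, _ := pem.Decode(data)
        if block == nil {
            panic("no PEM block found")
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            panic(err)
        }
        if time.Until(cert.NotAfter) < 24*time.Hour {
            fmt.Println("certificate will expire within 86400 seconds")
        } else {
            fmt.Println("certificate will not expire")
        }
    }
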
	I1009 19:27:51.931465  343307 kubeadm.go:934] updating node {m02 192.168.49.3 8443 v1.34.1 crio true true} ...
	I1009 19:27:51.931643  343307 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-807463-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.3
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:ha-807463 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1009 19:27:51.931677  343307 kube-vip.go:115] generating kube-vip config ...
	I1009 19:27:51.931730  343307 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I1009 19:27:51.945085  343307 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I1009 19:27:51.945174  343307 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
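
Before writing the manifest above, minikube probed for the ip_vs kernel module (sudo sh -c "lsmod | grep ip_vs" at 19:27:51.945085); the probe failed, so it gave up on IPVS-based control-plane load-balancing and fell back to the ARP-mode kube-vip configuration shown. A hedged sketch of an equivalent module check, assuming that reading /proc/modules (which lsmod itself reads) is an acceptable stand-in:

    // Hedged sketch of the ip_vs availability probe: lsmod reads /proc/modules,
    // so scanning that file for an "ip_vs" entry is roughly equivalent.
    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    func main() {
        data, err := os.ReadFile("/proc/modules")
        hasIPVS := err == nil && strings.Contains(string(data), "ip_vs")
        if hasIPVS {
            fmt.Println("ip_vs present: control-plane load-balancing can be enabled")
        } else {
            fmt.Println("ip_vs missing: fall back to ARP-only kube-vip config")
        }
    }
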
	I1009 19:27:51.945236  343307 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1009 19:27:51.955208  343307 binaries.go:44] Found k8s binaries, skipping transfer
	I1009 19:27:51.955321  343307 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I1009 19:27:51.963468  343307 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1009 19:27:51.977048  343307 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1009 19:27:51.990708  343307 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I1009 19:27:52.008521  343307 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1009 19:27:52.012741  343307 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1009 19:27:52.024091  343307 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 19:27:52.162593  343307 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1009 19:27:52.176738  343307 start.go:236] Will wait 6m0s for node &{Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1009 19:27:52.177297  343307 config.go:182] Loaded profile config "ha-807463": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1009 19:27:52.180462  343307 out.go:179] * Verifying Kubernetes components...
	I1009 19:27:52.183354  343307 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 19:27:52.328633  343307 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1009 19:27:52.343053  343307 kapi.go:59] client config for ha-807463: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21683-294150/.minikube/profiles/ha-807463/client.crt", KeyFile:"/home/jenkins/minikube-integration/21683-294150/.minikube/profiles/ha-807463/client.key", CAFile:"/home/jenkins/minikube-integration/21683-294150/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2120180), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1009 19:27:52.343132  343307 kubeadm.go:491] Overriding stale ClientConfig host https://192.168.49.254:8443 with https://192.168.49.2:8443
	I1009 19:27:52.343378  343307 node_ready.go:35] waiting up to 6m0s for node "ha-807463-m02" to be "Ready" ...
	I1009 19:28:12.417047  343307 node_ready.go:49] node "ha-807463-m02" is "Ready"
	I1009 19:28:12.417075  343307 node_ready.go:38] duration metric: took 20.07367073s for node "ha-807463-m02" to be "Ready" ...
	I1009 19:28:12.417087  343307 api_server.go:52] waiting for apiserver process to appear ...
	I1009 19:28:12.417171  343307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 19:28:12.917913  343307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 19:28:13.418163  343307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 19:28:13.917283  343307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 19:28:14.417776  343307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 19:28:14.441559  343307 api_server.go:72] duration metric: took 22.264725667s to wait for apiserver process to appear ...
	I1009 19:28:14.441582  343307 api_server.go:88] waiting for apiserver healthz status ...
	I1009 19:28:14.441601  343307 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1009 19:28:14.457402  343307 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1009 19:28:14.458648  343307 api_server.go:141] control plane version: v1.34.1
	I1009 19:28:14.458703  343307 api_server.go:131] duration metric: took 17.113274ms to wait for apiserver health ...
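
The healthz wait above repeatedly issues HTTPS GETs against https://192.168.49.2:8443/healthz until the apiserver answers 200 "ok". A minimal Go probe in the same spirit, assuming the cluster CA at the path the log copied earlier; this is an illustration, not the ssh_runner-based check minikube performs.

    // Hedged sketch of the /healthz probe, trusting the cluster CA from the log.
    package main

    import (
        "crypto/tls"
        "crypto/x509"
        "fmt"
        "io"
        "net/http"
        "os"
        "time"
    )

    func main() {
        caPEM, _ := os.ReadFile("/var/lib/minikube/certs/ca.crt")
        pool := x509.NewCertPool()
        pool.AppendCertsFromPEM(caPEM)
        client := &http.Client{
            Timeout:   2 * time.Second,
            Transport: &http.Transport{TLSClientConfig: &tls.Config{RootCAs: pool}},
        }
        resp, err := client.Get("https://192.168.49.2:8443/healthz")
        if err != nil {
            panic(err)
        }
        defer resp.Body.Close()
        body, _ := io.ReadAll(resp.Body)
        fmt.Println(resp.StatusCode, string(body)) // expect 200 and "ok"
    }
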
	I1009 19:28:14.458728  343307 system_pods.go:43] waiting for kube-system pods to appear ...
	I1009 19:28:14.470395  343307 system_pods.go:59] 26 kube-system pods found
	I1009 19:28:14.470439  343307 system_pods.go:61] "coredns-66bc5c9577-tswbs" [5837c6fe-278a-4b3a-98d1-79992fe9ea08] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1009 19:28:14.470449  343307 system_pods.go:61] "coredns-66bc5c9577-vkzgf" [80c50dd0-6a2c-4662-80d3-72f45754c3df] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1009 19:28:14.470454  343307 system_pods.go:61] "etcd-ha-807463" [84964141-cf31-4652-9a3c-a9265edf4f8d] Running
	I1009 19:28:14.470459  343307 system_pods.go:61] "etcd-ha-807463-m02" [e91cfd04-5988-45ce-9dae-b204db6efe4e] Running
	I1009 19:28:14.470464  343307 system_pods.go:61] "etcd-ha-807463-m03" [26cd4bca-fd69-452f-b5a2-b9bbc5966ded] Running
	I1009 19:28:14.470473  343307 system_pods.go:61] "kindnet-bc8tf" [f003f127-5e25-434a-837b-d021fb0e3fa7] Running
	I1009 19:28:14.470477  343307 system_pods.go:61] "kindnet-dvwc7" [2a7512ff-e63c-4aa0-8b4e-fb241415067f] Running
	I1009 19:28:14.470483  343307 system_pods.go:61] "kindnet-gvpmq" [223d0c34-5384-4cd5-a0d2-842a422629ab] Running
	I1009 19:28:14.470488  343307 system_pods.go:61] "kindnet-rc46j" [22f58fe4-1d11-4259-b9f9-e8740b8b2257] Running
	I1009 19:28:14.470501  343307 system_pods.go:61] "kube-apiserver-ha-807463" [f6f353e4-8237-46db-a4a8-cd536448a79b] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1009 19:28:14.470507  343307 system_pods.go:61] "kube-apiserver-ha-807463-m02" [3d8c0d4b-2cfb-4de6-8d9f-95e25e6f2a4e] Running
	I1009 19:28:14.470517  343307 system_pods.go:61] "kube-apiserver-ha-807463-m03" [a7b828f8-ab95-440a-b42e-e48d83bf3d20] Running
	I1009 19:28:14.470521  343307 system_pods.go:61] "kube-controller-manager-ha-807463" [e409b5f4-e73e-4270-bc1b-44b9a84123c7] Running
	I1009 19:28:14.470527  343307 system_pods.go:61] "kube-controller-manager-ha-807463-m02" [bce8c53d-0ba9-4e5f-93ca-06958824d9ba] Running
	I1009 19:28:14.470538  343307 system_pods.go:61] "kube-controller-manager-ha-807463-m03" [96d81c2f-668e-4729-aa2c-ab008af31ef1] Running
	I1009 19:28:14.470542  343307 system_pods.go:61] "kube-proxy-2lp2p" [cb605c64-8004-4f40-8e70-eb8e3184d3d6] Running
	I1009 19:28:14.470546  343307 system_pods.go:61] "kube-proxy-7lpbk" [d6ba71bf-d06d-4ade-b0e4-85303842110c] Running
	I1009 19:28:14.470550  343307 system_pods.go:61] "kube-proxy-b84dn" [9c10ee5e-8408-4b6f-985a-8d4f44a869cc] Running
	I1009 19:28:14.470555  343307 system_pods.go:61] "kube-proxy-vw7c5" [89df419c-841c-4a9c-af83-50e98327318d] Running
	I1009 19:28:14.470561  343307 system_pods.go:61] "kube-scheduler-ha-807463" [d577e200-00d6-4bac-aa67-0f7ef54c4d1a] Running
	I1009 19:28:14.470568  343307 system_pods.go:61] "kube-scheduler-ha-807463-m02" [848b94f3-79dc-44dc-8416-33c96451e0c0] Running
	I1009 19:28:14.470572  343307 system_pods.go:61] "kube-scheduler-ha-807463-m03" [f7153dac-0ede-40dc-b18c-1c03bebc8414] Running
	I1009 19:28:14.470578  343307 system_pods.go:61] "kube-vip-ha-807463" [f4f09ea9-0059-4cc4-9c0b-0ea2240a1885] Running
	I1009 19:28:14.470583  343307 system_pods.go:61] "kube-vip-ha-807463-m02" [98f28358-d9e9-4f8a-b407-b14baa34ea75] Running
	I1009 19:28:14.470589  343307 system_pods.go:61] "kube-vip-ha-807463-m03" [c150d4cd-1c28-4677-9a55-6e2d119daa81] Running
	I1009 19:28:14.470594  343307 system_pods.go:61] "storage-provisioner" [b9e8a81e-2bee-4542-b231-7490dfbf6065] Running
	I1009 19:28:14.470599  343307 system_pods.go:74] duration metric: took 11.85336ms to wait for pod list to return data ...
	I1009 19:28:14.470612  343307 default_sa.go:34] waiting for default service account to be created ...
	I1009 19:28:14.482492  343307 default_sa.go:45] found service account: "default"
	I1009 19:28:14.482522  343307 default_sa.go:55] duration metric: took 11.902296ms for default service account to be created ...
	I1009 19:28:14.482532  343307 system_pods.go:116] waiting for k8s-apps to be running ...
	I1009 19:28:14.496415  343307 system_pods.go:86] 26 kube-system pods found
	I1009 19:28:14.496458  343307 system_pods.go:89] "coredns-66bc5c9577-tswbs" [5837c6fe-278a-4b3a-98d1-79992fe9ea08] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1009 19:28:14.496468  343307 system_pods.go:89] "coredns-66bc5c9577-vkzgf" [80c50dd0-6a2c-4662-80d3-72f45754c3df] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1009 19:28:14.496475  343307 system_pods.go:89] "etcd-ha-807463" [84964141-cf31-4652-9a3c-a9265edf4f8d] Running
	I1009 19:28:14.496480  343307 system_pods.go:89] "etcd-ha-807463-m02" [e91cfd04-5988-45ce-9dae-b204db6efe4e] Running
	I1009 19:28:14.496484  343307 system_pods.go:89] "etcd-ha-807463-m03" [26cd4bca-fd69-452f-b5a2-b9bbc5966ded] Running
	I1009 19:28:14.496488  343307 system_pods.go:89] "kindnet-bc8tf" [f003f127-5e25-434a-837b-d021fb0e3fa7] Running
	I1009 19:28:14.496493  343307 system_pods.go:89] "kindnet-dvwc7" [2a7512ff-e63c-4aa0-8b4e-fb241415067f] Running
	I1009 19:28:14.496502  343307 system_pods.go:89] "kindnet-gvpmq" [223d0c34-5384-4cd5-a0d2-842a422629ab] Running
	I1009 19:28:14.496509  343307 system_pods.go:89] "kindnet-rc46j" [22f58fe4-1d11-4259-b9f9-e8740b8b2257] Running
	I1009 19:28:14.496517  343307 system_pods.go:89] "kube-apiserver-ha-807463" [f6f353e4-8237-46db-a4a8-cd536448a79b] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1009 19:28:14.496523  343307 system_pods.go:89] "kube-apiserver-ha-807463-m02" [3d8c0d4b-2cfb-4de6-8d9f-95e25e6f2a4e] Running
	I1009 19:28:14.496534  343307 system_pods.go:89] "kube-apiserver-ha-807463-m03" [a7b828f8-ab95-440a-b42e-e48d83bf3d20] Running
	I1009 19:28:14.496539  343307 system_pods.go:89] "kube-controller-manager-ha-807463" [e409b5f4-e73e-4270-bc1b-44b9a84123c7] Running
	I1009 19:28:14.496544  343307 system_pods.go:89] "kube-controller-manager-ha-807463-m02" [bce8c53d-0ba9-4e5f-93ca-06958824d9ba] Running
	I1009 19:28:14.496553  343307 system_pods.go:89] "kube-controller-manager-ha-807463-m03" [96d81c2f-668e-4729-aa2c-ab008af31ef1] Running
	I1009 19:28:14.496557  343307 system_pods.go:89] "kube-proxy-2lp2p" [cb605c64-8004-4f40-8e70-eb8e3184d3d6] Running
	I1009 19:28:14.496561  343307 system_pods.go:89] "kube-proxy-7lpbk" [d6ba71bf-d06d-4ade-b0e4-85303842110c] Running
	I1009 19:28:14.496566  343307 system_pods.go:89] "kube-proxy-b84dn" [9c10ee5e-8408-4b6f-985a-8d4f44a869cc] Running
	I1009 19:28:14.496575  343307 system_pods.go:89] "kube-proxy-vw7c5" [89df419c-841c-4a9c-af83-50e98327318d] Running
	I1009 19:28:14.496579  343307 system_pods.go:89] "kube-scheduler-ha-807463" [d577e200-00d6-4bac-aa67-0f7ef54c4d1a] Running
	I1009 19:28:14.496583  343307 system_pods.go:89] "kube-scheduler-ha-807463-m02" [848b94f3-79dc-44dc-8416-33c96451e0c0] Running
	I1009 19:28:14.496587  343307 system_pods.go:89] "kube-scheduler-ha-807463-m03" [f7153dac-0ede-40dc-b18c-1c03bebc8414] Running
	I1009 19:28:14.496591  343307 system_pods.go:89] "kube-vip-ha-807463" [f4f09ea9-0059-4cc4-9c0b-0ea2240a1885] Running
	I1009 19:28:14.496597  343307 system_pods.go:89] "kube-vip-ha-807463-m02" [98f28358-d9e9-4f8a-b407-b14baa34ea75] Running
	I1009 19:28:14.496601  343307 system_pods.go:89] "kube-vip-ha-807463-m03" [c150d4cd-1c28-4677-9a55-6e2d119daa81] Running
	I1009 19:28:14.496609  343307 system_pods.go:89] "storage-provisioner" [b9e8a81e-2bee-4542-b231-7490dfbf6065] Running
	I1009 19:28:14.496616  343307 system_pods.go:126] duration metric: took 14.078508ms to wait for k8s-apps to be running ...
	I1009 19:28:14.496627  343307 system_svc.go:44] waiting for kubelet service to be running ....
	I1009 19:28:14.496696  343307 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1009 19:28:14.527254  343307 system_svc.go:56] duration metric: took 30.616666ms WaitForService to wait for kubelet
	I1009 19:28:14.527281  343307 kubeadm.go:586] duration metric: took 22.350452667s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
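
The "k8s-apps running" wait above amounts to listing kube-system pods and checking that each is Running and its containers are Ready; the coredns and kube-apiserver entries flagged ContainersNotReady still pass because their pod phase is Running. A hedged client-go sketch of such a listing (the kubeconfig path is an illustrative assumption, not minikube's code path):

    // Hedged sketch of the kube-system pod check, using client-go.
    package main

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        pods, err := cs.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{})
        if err != nil {
            panic(err)
        }
        for _, p := range pods.Items {
            ready := false
            for _, c := range p.Status.Conditions {
                if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
                    ready = true
                }
            }
            fmt.Printf("%-45s phase=%-9s ready=%v\n", p.Name, p.Status.Phase, ready)
        }
    }
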
	I1009 19:28:14.527300  343307 node_conditions.go:102] verifying NodePressure condition ...
	I1009 19:28:14.536047  343307 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1009 19:28:14.536130  343307 node_conditions.go:123] node cpu capacity is 2
	I1009 19:28:14.536159  343307 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1009 19:28:14.536184  343307 node_conditions.go:123] node cpu capacity is 2
	I1009 19:28:14.536225  343307 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1009 19:28:14.536247  343307 node_conditions.go:123] node cpu capacity is 2
	I1009 19:28:14.536284  343307 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1009 19:28:14.536308  343307 node_conditions.go:123] node cpu capacity is 2
	I1009 19:28:14.536330  343307 node_conditions.go:105] duration metric: took 9.020752ms to run NodePressure ...
	I1009 19:28:14.536373  343307 start.go:242] waiting for startup goroutines ...
	I1009 19:28:14.536414  343307 start.go:256] writing updated cluster config ...
	I1009 19:28:14.540247  343307 out.go:203] 
	I1009 19:28:14.543487  343307 config.go:182] Loaded profile config "ha-807463": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1009 19:28:14.543686  343307 profile.go:143] Saving config to /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/ha-807463/config.json ...
	I1009 19:28:14.547047  343307 out.go:179] * Starting "ha-807463-m03" control-plane node in "ha-807463" cluster
	I1009 19:28:14.550723  343307 cache.go:123] Beginning downloading kic base image for docker with crio
	I1009 19:28:14.553769  343307 out.go:179] * Pulling base image v0.0.48-1759745255-21703 ...
	I1009 19:28:14.556767  343307 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1009 19:28:14.556832  343307 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon
	I1009 19:28:14.557073  343307 cache.go:58] Caching tarball of preloaded images
	I1009 19:28:14.557216  343307 preload.go:233] Found /home/jenkins/minikube-integration/21683-294150/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1009 19:28:14.557276  343307 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1009 19:28:14.557431  343307 profile.go:143] Saving config to /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/ha-807463/config.json ...
	I1009 19:28:14.597092  343307 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon, skipping pull
	I1009 19:28:14.597123  343307 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 exists in daemon, skipping load
	I1009 19:28:14.597144  343307 cache.go:232] Successfully downloaded all kic artifacts
	I1009 19:28:14.597168  343307 start.go:361] acquireMachinesLock for ha-807463-m03: {Name:mk0e43107ec0c9bc8c06da921397f514d91f61d8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1009 19:28:14.597229  343307 start.go:365] duration metric: took 46.457µs to acquireMachinesLock for "ha-807463-m03"
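
Provisioning of each node is serialized behind a named machines lock; the parameters logged above (Delay:500ms Timeout:10m0s) describe a retry-every-500ms, give-up-after-10-minutes acquisition. A hedged sketch of such a loop follows; tryLock is a hypothetical placeholder, not a real minikube function.

    // Hedged sketch of a Delay/Timeout retry loop like the one the logged
    // parameters describe. tryLock is a hypothetical stand-in for the real
    // lock primitive.
    package main

    import (
        "errors"
        "time"
    )

    func tryLock(name string) bool { return true } // hypothetical placeholder

    func acquireWithRetry(name string, delay, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for !tryLock(name) {
            if time.Now().After(deadline) {
                return errors.New("timed out acquiring lock " + name)
            }
            time.Sleep(delay)
        }
        return nil
    }

    func main() {
        _ = acquireWithRetry("ha-807463-m03", 500*time.Millisecond, 10*time.Minute)
    }
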
	I1009 19:28:14.597250  343307 start.go:97] Skipping create...Using existing machine configuration
	I1009 19:28:14.597255  343307 fix.go:55] fixHost starting: m03
	I1009 19:28:14.597512  343307 cli_runner.go:164] Run: docker container inspect ha-807463-m03 --format={{.State.Status}}
	I1009 19:28:14.632017  343307 fix.go:113] recreateIfNeeded on ha-807463-m03: state=Stopped err=<nil>
	W1009 19:28:14.632042  343307 fix.go:139] unexpected machine state, will restart: <nil>
	I1009 19:28:14.635426  343307 out.go:252] * Restarting existing docker container for "ha-807463-m03" ...
	I1009 19:28:14.635514  343307 cli_runner.go:164] Run: docker start ha-807463-m03
	I1009 19:28:15.014352  343307 cli_runner.go:164] Run: docker container inspect ha-807463-m03 --format={{.State.Status}}
	I1009 19:28:15.044342  343307 kic.go:430] container "ha-807463-m03" state is running.
	I1009 19:28:15.044802  343307 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-807463-m03
	I1009 19:28:15.084035  343307 profile.go:143] Saving config to /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/ha-807463/config.json ...
	I1009 19:28:15.084294  343307 machine.go:93] provisionDockerMachine start ...
	I1009 19:28:15.084356  343307 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-807463-m03
	I1009 19:28:15.113499  343307 main.go:141] libmachine: Using SSH client type: native
	I1009 19:28:15.113819  343307 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33191 <nil> <nil>}
	I1009 19:28:15.113829  343307 main.go:141] libmachine: About to run SSH command:
	hostname
	I1009 19:28:15.114606  343307 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1009 19:28:18.387326  343307 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-807463-m03
	
	I1009 19:28:18.387353  343307 ubuntu.go:182] provisioning hostname "ha-807463-m03"
	I1009 19:28:18.387421  343307 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-807463-m03
	I1009 19:28:18.414941  343307 main.go:141] libmachine: Using SSH client type: native
	I1009 19:28:18.415269  343307 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33191 <nil> <nil>}
	I1009 19:28:18.415288  343307 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-807463-m03 && echo "ha-807463-m03" | sudo tee /etc/hostname
	I1009 19:28:18.857505  343307 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-807463-m03
	
	I1009 19:28:18.857586  343307 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-807463-m03
	I1009 19:28:18.886274  343307 main.go:141] libmachine: Using SSH client type: native
	I1009 19:28:18.886587  343307 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33191 <nil> <nil>}
	I1009 19:28:18.886603  343307 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-807463-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-807463-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-807463-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1009 19:28:19.124493  343307 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1009 19:28:19.124522  343307 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21683-294150/.minikube CaCertPath:/home/jenkins/minikube-integration/21683-294150/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21683-294150/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21683-294150/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21683-294150/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21683-294150/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21683-294150/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21683-294150/.minikube}
	I1009 19:28:19.124543  343307 ubuntu.go:190] setting up certificates
	I1009 19:28:19.124552  343307 provision.go:84] configureAuth start
	I1009 19:28:19.124639  343307 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-807463-m03
	I1009 19:28:19.150744  343307 provision.go:143] copyHostCerts
	I1009 19:28:19.150791  343307 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-294150/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21683-294150/.minikube/ca.pem
	I1009 19:28:19.150823  343307 exec_runner.go:144] found /home/jenkins/minikube-integration/21683-294150/.minikube/ca.pem, removing ...
	I1009 19:28:19.150839  343307 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21683-294150/.minikube/ca.pem
	I1009 19:28:19.150921  343307 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-294150/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21683-294150/.minikube/ca.pem (1078 bytes)
	I1009 19:28:19.151006  343307 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-294150/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21683-294150/.minikube/cert.pem
	I1009 19:28:19.151029  343307 exec_runner.go:144] found /home/jenkins/minikube-integration/21683-294150/.minikube/cert.pem, removing ...
	I1009 19:28:19.151037  343307 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21683-294150/.minikube/cert.pem
	I1009 19:28:19.151079  343307 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-294150/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21683-294150/.minikube/cert.pem (1123 bytes)
	I1009 19:28:19.151132  343307 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-294150/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21683-294150/.minikube/key.pem
	I1009 19:28:19.151154  343307 exec_runner.go:144] found /home/jenkins/minikube-integration/21683-294150/.minikube/key.pem, removing ...
	I1009 19:28:19.151159  343307 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21683-294150/.minikube/key.pem
	I1009 19:28:19.151184  343307 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-294150/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21683-294150/.minikube/key.pem (1679 bytes)
	I1009 19:28:19.151236  343307 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21683-294150/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21683-294150/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21683-294150/.minikube/certs/ca-key.pem org=jenkins.ha-807463-m03 san=[127.0.0.1 192.168.49.4 ha-807463-m03 localhost minikube]
	I1009 19:28:20.594319  343307 provision.go:177] copyRemoteCerts
	I1009 19:28:20.594391  343307 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1009 19:28:20.594445  343307 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-807463-m03
	I1009 19:28:20.617127  343307 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33191 SSHKeyPath:/home/jenkins/minikube-integration/21683-294150/.minikube/machines/ha-807463-m03/id_rsa Username:docker}
	I1009 19:28:20.793603  343307 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-294150/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1009 19:28:20.793667  343307 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-294150/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1009 19:28:20.838358  343307 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-294150/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1009 19:28:20.838425  343307 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-294150/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1009 19:28:20.897009  343307 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-294150/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1009 19:28:20.897076  343307 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-294150/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1009 19:28:20.947823  343307 provision.go:87] duration metric: took 1.823247487s to configureAuth
	I1009 19:28:20.947854  343307 ubuntu.go:206] setting minikube options for container-runtime
	I1009 19:28:20.948102  343307 config.go:182] Loaded profile config "ha-807463": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1009 19:28:20.948220  343307 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-807463-m03
	I1009 19:28:20.980853  343307 main.go:141] libmachine: Using SSH client type: native
	I1009 19:28:20.981192  343307 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33191 <nil> <nil>}
	I1009 19:28:20.981215  343307 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1009 19:28:21.547892  343307 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1009 19:28:21.547940  343307 machine.go:96] duration metric: took 6.463636002s to provisionDockerMachine
	I1009 19:28:21.547953  343307 start.go:294] postStartSetup for "ha-807463-m03" (driver="docker")
	I1009 19:28:21.547963  343307 start.go:323] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1009 19:28:21.548058  343307 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1009 19:28:21.548103  343307 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-807463-m03
	I1009 19:28:21.574619  343307 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33191 SSHKeyPath:/home/jenkins/minikube-integration/21683-294150/.minikube/machines/ha-807463-m03/id_rsa Username:docker}
	I1009 19:28:21.688699  343307 ssh_runner.go:195] Run: cat /etc/os-release
	I1009 19:28:21.693344  343307 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1009 19:28:21.693371  343307 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1009 19:28:21.693382  343307 filesync.go:126] Scanning /home/jenkins/minikube-integration/21683-294150/.minikube/addons for local assets ...
	I1009 19:28:21.693440  343307 filesync.go:126] Scanning /home/jenkins/minikube-integration/21683-294150/.minikube/files for local assets ...
	I1009 19:28:21.693513  343307 filesync.go:149] local asset: /home/jenkins/minikube-integration/21683-294150/.minikube/files/etc/ssl/certs/2960022.pem -> 2960022.pem in /etc/ssl/certs
	I1009 19:28:21.693520  343307 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-294150/.minikube/files/etc/ssl/certs/2960022.pem -> /etc/ssl/certs/2960022.pem
	I1009 19:28:21.693621  343307 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1009 19:28:21.703022  343307 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-294150/.minikube/files/etc/ssl/certs/2960022.pem --> /etc/ssl/certs/2960022.pem (1708 bytes)
	I1009 19:28:21.726060  343307 start.go:297] duration metric: took 178.090392ms for postStartSetup
	I1009 19:28:21.726183  343307 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1009 19:28:21.726252  343307 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-807463-m03
	I1009 19:28:21.754232  343307 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33191 SSHKeyPath:/home/jenkins/minikube-integration/21683-294150/.minikube/machines/ha-807463-m03/id_rsa Username:docker}
	I1009 19:28:21.887060  343307 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1009 19:28:21.902692  343307 fix.go:57] duration metric: took 7.305428838s for fixHost
	I1009 19:28:21.902721  343307 start.go:84] releasing machines lock for "ha-807463-m03", held for 7.305481549s
	I1009 19:28:21.902791  343307 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-807463-m03
	I1009 19:28:21.935444  343307 out.go:179] * Found network options:
	I1009 19:28:21.938464  343307 out.go:179]   - NO_PROXY=192.168.49.2,192.168.49.3
	W1009 19:28:21.941326  343307 proxy.go:120] fail to check proxy env: Error ip not in block
	W1009 19:28:21.941366  343307 proxy.go:120] fail to check proxy env: Error ip not in block
	W1009 19:28:21.941390  343307 proxy.go:120] fail to check proxy env: Error ip not in block
	W1009 19:28:21.941399  343307 proxy.go:120] fail to check proxy env: Error ip not in block
	I1009 19:28:21.941489  343307 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1009 19:28:21.941533  343307 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-807463-m03
	I1009 19:28:21.941553  343307 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1009 19:28:21.941612  343307 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-807463-m03
	I1009 19:28:21.971654  343307 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33191 SSHKeyPath:/home/jenkins/minikube-integration/21683-294150/.minikube/machines/ha-807463-m03/id_rsa Username:docker}
	I1009 19:28:21.991268  343307 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33191 SSHKeyPath:/home/jenkins/minikube-integration/21683-294150/.minikube/machines/ha-807463-m03/id_rsa Username:docker}
	I1009 19:28:22.521550  343307 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1009 19:28:22.531247  343307 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1009 19:28:22.531361  343307 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1009 19:28:22.554768  343307 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1009 19:28:22.554843  343307 start.go:496] detecting cgroup driver to use...
	I1009 19:28:22.554892  343307 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1009 19:28:22.554962  343307 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1009 19:28:22.583220  343307 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1009 19:28:22.599310  343307 docker.go:218] disabling cri-docker service (if available) ...
	I1009 19:28:22.599403  343307 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1009 19:28:22.632291  343307 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1009 19:28:22.653641  343307 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1009 19:28:23.037548  343307 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1009 19:28:23.288869  343307 docker.go:234] disabling docker service ...
	I1009 19:28:23.288983  343307 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1009 19:28:23.316355  343307 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1009 19:28:23.341879  343307 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1009 19:28:23.636459  343307 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1009 19:28:23.958882  343307 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1009 19:28:24.002025  343307 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1009 19:28:24.060081  343307 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1009 19:28:24.060153  343307 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:28:24.094554  343307 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1009 19:28:24.094632  343307 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:28:24.113879  343307 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:28:24.124444  343307 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:28:24.134135  343307 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1009 19:28:24.153071  343307 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:28:24.164683  343307 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:28:24.175420  343307 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:28:24.185724  343307 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1009 19:28:24.196010  343307 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1009 19:28:24.206389  343307 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 19:28:24.403396  343307 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1009 19:29:54.625257  343307 ssh_runner.go:235] Completed: sudo systemctl restart crio: (1m30.2217701s)
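
The lines above show minikube rewriting the CRI-O drop-in (pause image, cgroup manager, conmon cgroup, sysctls) and then restarting the service, which took roughly 90 seconds on this node. A minimal sketch for double-checking those settings afterwards, assuming the same profile/node names and drop-in path as in this log:

  # Inspect the values minikube wrote into the CRI-O drop-in (path taken from the log above).
  minikube ssh -p ha-807463 -n ha-807463-m03 "sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup' /etc/crio/crio.conf.d/02-crio.conf"
  # Confirm CRI-O came back after the restart.
  minikube ssh -p ha-807463 -n ha-807463-m03 "sudo systemctl is-active crio"
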
	I1009 19:29:54.625289  343307 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1009 19:29:54.625347  343307 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1009 19:29:54.629422  343307 start.go:564] Will wait 60s for crictl version
	I1009 19:29:54.629487  343307 ssh_runner.go:195] Run: which crictl
	I1009 19:29:54.633348  343307 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1009 19:29:54.664178  343307 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1009 19:29:54.664263  343307 ssh_runner.go:195] Run: crio --version
	I1009 19:29:54.695047  343307 ssh_runner.go:195] Run: crio --version
	I1009 19:29:54.726968  343307 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1009 19:29:54.729882  343307 out.go:179]   - env NO_PROXY=192.168.49.2
	I1009 19:29:54.732783  343307 out.go:179]   - env NO_PROXY=192.168.49.2,192.168.49.3
	I1009 19:29:54.735745  343307 cli_runner.go:164] Run: docker network inspect ha-807463 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1009 19:29:54.754488  343307 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1009 19:29:54.758549  343307 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1009 19:29:54.769025  343307 mustload.go:65] Loading cluster: ha-807463
	I1009 19:29:54.769312  343307 config.go:182] Loaded profile config "ha-807463": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1009 19:29:54.769581  343307 cli_runner.go:164] Run: docker container inspect ha-807463 --format={{.State.Status}}
	I1009 19:29:54.789308  343307 host.go:66] Checking if "ha-807463" exists ...
	I1009 19:29:54.789631  343307 certs.go:69] Setting up /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/ha-807463 for IP: 192.168.49.4
	I1009 19:29:54.789648  343307 certs.go:195] generating shared ca certs ...
	I1009 19:29:54.789665  343307 certs.go:227] acquiring lock for ca certs: {Name:mk21fccf7de12baa51e226eec44546b42b579a70 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 19:29:54.789790  343307 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21683-294150/.minikube/ca.key
	I1009 19:29:54.789840  343307 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21683-294150/.minikube/proxy-client-ca.key
	I1009 19:29:54.789852  343307 certs.go:257] generating profile certs ...
	I1009 19:29:54.789935  343307 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/ha-807463/client.key
	I1009 19:29:54.790005  343307 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/ha-807463/apiserver.key.8f59bad3
	I1009 19:29:54.790050  343307 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/ha-807463/proxy-client.key
	I1009 19:29:54.790063  343307 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-294150/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1009 19:29:54.790075  343307 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-294150/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1009 19:29:54.790096  343307 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-294150/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1009 19:29:54.790112  343307 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-294150/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1009 19:29:54.790124  343307 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/ha-807463/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1009 19:29:54.790141  343307 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/ha-807463/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1009 19:29:54.790152  343307 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/ha-807463/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1009 19:29:54.790162  343307 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/ha-807463/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1009 19:29:54.790217  343307 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-294150/.minikube/certs/296002.pem (1338 bytes)
	W1009 19:29:54.790247  343307 certs.go:480] ignoring /home/jenkins/minikube-integration/21683-294150/.minikube/certs/296002_empty.pem, impossibly tiny 0 bytes
	I1009 19:29:54.790255  343307 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-294150/.minikube/certs/ca-key.pem (1679 bytes)
	I1009 19:29:54.790279  343307 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-294150/.minikube/certs/ca.pem (1078 bytes)
	I1009 19:29:54.790304  343307 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-294150/.minikube/certs/cert.pem (1123 bytes)
	I1009 19:29:54.790325  343307 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-294150/.minikube/certs/key.pem (1679 bytes)
	I1009 19:29:54.790366  343307 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-294150/.minikube/files/etc/ssl/certs/2960022.pem (1708 bytes)
	I1009 19:29:54.790392  343307 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-294150/.minikube/files/etc/ssl/certs/2960022.pem -> /usr/share/ca-certificates/2960022.pem
	I1009 19:29:54.790404  343307 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-294150/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1009 19:29:54.790415  343307 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-294150/.minikube/certs/296002.pem -> /usr/share/ca-certificates/296002.pem
	I1009 19:29:54.790566  343307 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-807463
	I1009 19:29:54.807723  343307 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33181 SSHKeyPath:/home/jenkins/minikube-integration/21683-294150/.minikube/machines/ha-807463/id_rsa Username:docker}
	I1009 19:29:54.905478  343307 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I1009 19:29:54.915115  343307 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I1009 19:29:54.924123  343307 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I1009 19:29:54.927867  343307 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I1009 19:29:54.936366  343307 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I1009 19:29:54.940038  343307 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I1009 19:29:54.948153  343307 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I1009 19:29:54.952558  343307 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I1009 19:29:54.962178  343307 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I1009 19:29:54.966425  343307 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I1009 19:29:54.974761  343307 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I1009 19:29:54.978501  343307 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I1009 19:29:54.987786  343307 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-294150/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1009 19:29:55.037480  343307 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-294150/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1009 19:29:55.060963  343307 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-294150/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1009 19:29:55.082145  343307 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-294150/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1009 19:29:55.105188  343307 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/ha-807463/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1009 19:29:55.128516  343307 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/ha-807463/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1009 19:29:55.149252  343307 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/ha-807463/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1009 19:29:55.172354  343307 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/ha-807463/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1009 19:29:55.193857  343307 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-294150/.minikube/files/etc/ssl/certs/2960022.pem --> /usr/share/ca-certificates/2960022.pem (1708 bytes)
	I1009 19:29:55.219080  343307 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-294150/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1009 19:29:55.237634  343307 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-294150/.minikube/certs/296002.pem --> /usr/share/ca-certificates/296002.pem (1338 bytes)
	I1009 19:29:55.256720  343307 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I1009 19:29:55.279349  343307 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I1009 19:29:55.298083  343307 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I1009 19:29:55.312857  343307 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I1009 19:29:55.328467  343307 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I1009 19:29:55.343367  343307 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I1009 19:29:55.357598  343307 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I1009 19:29:55.374321  343307 ssh_runner.go:195] Run: openssl version
	I1009 19:29:55.380839  343307 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2960022.pem && ln -fs /usr/share/ca-certificates/2960022.pem /etc/ssl/certs/2960022.pem"
	I1009 19:29:55.389522  343307 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2960022.pem
	I1009 19:29:55.394545  343307 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  9 19:08 /usr/share/ca-certificates/2960022.pem
	I1009 19:29:55.394618  343307 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2960022.pem
	I1009 19:29:55.437345  343307 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2960022.pem /etc/ssl/certs/3ec20f2e.0"
	I1009 19:29:55.447436  343307 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1009 19:29:55.456198  343307 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1009 19:29:55.460194  343307 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  9 19:01 /usr/share/ca-certificates/minikubeCA.pem
	I1009 19:29:55.460288  343307 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1009 19:29:55.502457  343307 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1009 19:29:55.511155  343307 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/296002.pem && ln -fs /usr/share/ca-certificates/296002.pem /etc/ssl/certs/296002.pem"
	I1009 19:29:55.519603  343307 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/296002.pem
	I1009 19:29:55.523571  343307 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  9 19:08 /usr/share/ca-certificates/296002.pem
	I1009 19:29:55.523682  343307 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/296002.pem
	I1009 19:29:55.565661  343307 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/296002.pem /etc/ssl/certs/51391683.0"
	I1009 19:29:55.575332  343307 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1009 19:29:55.579545  343307 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1009 19:29:55.620938  343307 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1009 19:29:55.663052  343307 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1009 19:29:55.708075  343307 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1009 19:29:55.749078  343307 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1009 19:29:55.800791  343307 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
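
Each "openssl x509 ... -checkend 86400" run above exits non-zero only if the certificate expires within the next 86400 seconds (24 hours), so a clean pass means every checked control-plane cert is valid for at least another day. A small sketch for printing the actual expiry dates on the node (e.g. via minikube ssh), covering the same certs the log checks:

  for c in apiserver-etcd-client apiserver-kubelet-client front-proxy-client etcd/server etcd/peer etcd/healthcheck-client; do
    sudo openssl x509 -noout -subject -enddate -in /var/lib/minikube/certs/$c.crt
  done
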
	I1009 19:29:55.844259  343307 kubeadm.go:934] updating node {m03 192.168.49.4 8443 v1.34.1 crio true true} ...
	I1009 19:29:55.844433  343307 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-807463-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.4
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:ha-807463 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1009 19:29:55.844463  343307 kube-vip.go:115] generating kube-vip config ...
	I1009 19:29:55.844514  343307 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I1009 19:29:55.857076  343307 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I1009 19:29:55.857168  343307 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
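
kube-vip's control-plane load-balancing was skipped above because "lsmod | grep ip_vs" found no IPVS modules on the node; the VIP (192.168.49.254) is still configured, just without IPVS-based load-balancing. A hedged check for whether the modules can be loaded at all in this kernel/base image:

  minikube ssh -p ha-807463 -n ha-807463-m03 "sudo modprobe ip_vs 2>/dev/null; lsmod | grep ip_vs || echo 'ip_vs modules not available'"
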
	I1009 19:29:55.857232  343307 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1009 19:29:55.865620  343307 binaries.go:44] Found k8s binaries, skipping transfer
	I1009 19:29:55.865690  343307 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I1009 19:29:55.873976  343307 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1009 19:29:55.888496  343307 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1009 19:29:55.902132  343307 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I1009 19:29:55.918614  343307 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1009 19:29:55.922408  343307 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1009 19:29:55.932872  343307 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 19:29:56.078754  343307 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1009 19:29:56.098490  343307 start.go:236] Will wait 6m0s for node &{Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1009 19:29:56.098835  343307 config.go:182] Loaded profile config "ha-807463": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1009 19:29:56.102467  343307 out.go:179] * Verifying Kubernetes components...
	I1009 19:29:56.105295  343307 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 19:29:56.244415  343307 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1009 19:29:56.260645  343307 kapi.go:59] client config for ha-807463: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21683-294150/.minikube/profiles/ha-807463/client.crt", KeyFile:"/home/jenkins/minikube-integration/21683-294150/.minikube/profiles/ha-807463/client.key", CAFile:"/home/jenkins/minikube-integration/21683-294150/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2120180), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1009 19:29:56.260766  343307 kubeadm.go:491] Overriding stale ClientConfig host https://192.168.49.254:8443 with https://192.168.49.2:8443
	I1009 19:29:56.261025  343307 node_ready.go:35] waiting up to 6m0s for node "ha-807463-m03" to be "Ready" ...
	W1009 19:29:58.265441  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:30:00.338043  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:30:02.766376  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:30:05.271576  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:30:07.765013  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:30:09.766174  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:30:12.268909  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:30:14.764872  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:30:16.768216  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:30:19.265861  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:30:21.764655  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:30:23.765433  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:30:26.265822  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:30:28.267509  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:30:30.765442  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:30:33.266200  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:30:35.765625  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:30:38.265302  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:30:40.265407  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:30:42.270313  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:30:44.765053  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:30:47.264227  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:30:49.264310  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:30:51.264693  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:30:53.266262  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:30:55.765430  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:30:57.765657  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:31:00.296961  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:31:02.765162  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:31:05.265758  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:31:07.270661  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:31:09.764829  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:31:11.766346  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:31:14.265615  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:31:16.765212  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:31:19.264362  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:31:21.265737  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:31:23.765070  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:31:26.265524  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:31:28.764786  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:31:30.765098  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:31:33.265489  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:31:35.270526  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:31:37.764838  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:31:40.265487  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:31:42.765053  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:31:45.269843  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:31:47.765589  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:31:49.766098  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:31:52.274275  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:31:54.765171  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:31:57.265540  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:31:59.265763  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:32:01.270860  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:32:03.765024  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:32:06.265424  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:32:08.766290  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:32:10.766762  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:32:13.264661  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:32:15.265789  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:32:17.765441  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:32:19.765504  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:32:22.269835  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:32:24.764880  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:32:26.764993  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:32:28.765201  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:32:30.765672  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:32:33.269831  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:32:35.271203  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:32:37.764975  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:32:39.765423  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:32:42.271235  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:32:44.765366  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:32:47.264895  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:32:49.267101  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:32:51.764961  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:32:53.765546  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:32:55.765910  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:32:58.272156  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:33:00.765521  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:33:03.265015  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:33:05.265319  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:33:07.764930  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:33:09.765819  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:33:12.270731  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:33:14.764917  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:33:16.765423  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:33:19.265783  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:33:21.268655  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:33:23.764590  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:33:25.765798  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:33:28.266110  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:33:30.765102  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:33:33.272016  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:33:35.765481  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:33:38.266269  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:33:40.268920  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:33:42.764575  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:33:44.765157  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:33:47.271446  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:33:49.764820  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:33:51.765204  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:33:54.271178  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:33:56.765244  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:33:59.264746  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:34:01.265757  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:34:03.266309  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:34:05.765330  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:34:08.271832  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:34:10.764901  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:34:13.271000  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:34:15.764750  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:34:18.271187  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:34:20.764309  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:34:22.764554  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:34:24.765015  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:34:27.265491  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:34:29.269747  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:34:31.765383  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:34:34.265977  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:34:36.271158  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:34:38.764726  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:34:41.269997  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:34:43.765647  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:34:46.264806  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:34:48.264841  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:34:50.265171  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:34:52.273405  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:34:54.764904  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:34:56.772617  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:34:59.264570  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:35:01.266121  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:35:03.764578  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:35:05.765062  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:35:07.765743  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:35:10.264753  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:35:12.267514  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:35:14.271366  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:35:16.764238  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:35:18.764646  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:35:21.264582  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:35:23.765647  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:35:26.265493  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:35:28.765534  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:35:31.266108  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:35:33.271209  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:35:35.765495  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:35:38.264544  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:35:40.265777  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:35:42.765010  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:35:45.320159  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:35:47.765477  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:35:50.267171  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:35:52.764971  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:35:54.765424  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	I1009 19:35:56.261403  343307 node_ready.go:38] duration metric: took 6m0.00032425s for node "ha-807463-m03" to be "Ready" ...
	I1009 19:35:56.264406  343307 out.go:203] 
	W1009 19:35:56.267318  343307 out.go:285] X Exiting due to GUEST_START: failed to start node: adding node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1009 19:35:56.267354  343307 out.go:285] * 
	W1009 19:35:56.269757  343307 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1009 19:35:56.272075  343307 out.go:203] 
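
The failure is a GUEST_START timeout: the rejoined control-plane node ha-807463-m03 stayed at Ready=Unknown for the full 6-minute wait above. A hedged sketch for inspecting the node and its kubelet after a failure like this, assuming the default kubectl context minikube creates for the profile:

  kubectl --context ha-807463 get node ha-807463-m03 -o wide
  kubectl --context ha-807463 describe node ha-807463-m03
  minikube ssh -p ha-807463 -n ha-807463-m03 "sudo journalctl -u kubelet --no-pager -n 50"
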
	
	
	==> CRI-O <==
	Oct 09 19:28:13 ha-807463 crio[664]: time="2025-10-09T19:28:13.304067783Z" level=info msg="Started container" PID=1189 containerID=0e94c30541006adea7a9cf430df1905830797b4065898a1ff96a0a8704efcde5 description=kube-system/coredns-66bc5c9577-tswbs/coredns id=bd231ca3-3cb5-417c-a27f-e7e210bd2614 name=/runtime.v1.RuntimeService/StartContainer sandboxID=215954c6e5b58ec4e1876606af4120f74fa1b735788f97d908b617d088e10218
	Oct 09 19:28:43 ha-807463 conmon[1165]: conmon 49b67bb8cba0ee99aca2 <ninfo>: container 1170 exited with status 1
	Oct 09 19:28:43 ha-807463 crio[664]: time="2025-10-09T19:28:43.956113094Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=b2497cc7-982c-4437-8e10-8451b3daa825 name=/runtime.v1.ImageService/ImageStatus
	Oct 09 19:28:43 ha-807463 crio[664]: time="2025-10-09T19:28:43.957275756Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=ea766a4c-b850-4d02-b94c-15910e120466 name=/runtime.v1.ImageService/ImageStatus
	Oct 09 19:28:43 ha-807463 crio[664]: time="2025-10-09T19:28:43.958652007Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=441b7fe6-c8e8-4480-a875-e58f7cbbc12c name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:28:43 ha-807463 crio[664]: time="2025-10-09T19:28:43.958885919Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 09 19:28:43 ha-807463 crio[664]: time="2025-10-09T19:28:43.970702736Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 09 19:28:43 ha-807463 crio[664]: time="2025-10-09T19:28:43.970987881Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/934ecdd5b34159ac9e9805425bf47a7191ad8753b0f07efbbd463b24fea61539/merged/etc/passwd: no such file or directory"
	Oct 09 19:28:43 ha-807463 crio[664]: time="2025-10-09T19:28:43.971020948Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/934ecdd5b34159ac9e9805425bf47a7191ad8753b0f07efbbd463b24fea61539/merged/etc/group: no such file or directory"
	Oct 09 19:28:43 ha-807463 crio[664]: time="2025-10-09T19:28:43.971300818Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 09 19:28:43 ha-807463 crio[664]: time="2025-10-09T19:28:43.999493974Z" level=info msg="Created container 1416e569d8f8fe0cb15febba45212fdd6fb1718a9812f18587def66caefda3e1: kube-system/storage-provisioner/storage-provisioner" id=441b7fe6-c8e8-4480-a875-e58f7cbbc12c name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:28:44 ha-807463 crio[664]: time="2025-10-09T19:28:44.001575564Z" level=info msg="Starting container: 1416e569d8f8fe0cb15febba45212fdd6fb1718a9812f18587def66caefda3e1" id=2fe204ab-fca6-41e1-b709-a74e76e04d48 name=/runtime.v1.RuntimeService/StartContainer
	Oct 09 19:28:44 ha-807463 crio[664]: time="2025-10-09T19:28:44.008428394Z" level=info msg="Started container" PID=1408 containerID=1416e569d8f8fe0cb15febba45212fdd6fb1718a9812f18587def66caefda3e1 description=kube-system/storage-provisioner/storage-provisioner id=2fe204ab-fca6-41e1-b709-a74e76e04d48 name=/runtime.v1.RuntimeService/StartContainer sandboxID=7eb46fc741382f55fe16d9dcb41b62c8d30783b6fa783d2d33a2516785da8030
	Oct 09 19:28:53 ha-807463 crio[664]: time="2025-10-09T19:28:53.522067171Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 09 19:28:53 ha-807463 crio[664]: time="2025-10-09T19:28:53.525680672Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 09 19:28:53 ha-807463 crio[664]: time="2025-10-09T19:28:53.525854736Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 09 19:28:53 ha-807463 crio[664]: time="2025-10-09T19:28:53.525929903Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 09 19:28:53 ha-807463 crio[664]: time="2025-10-09T19:28:53.529201175Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 09 19:28:53 ha-807463 crio[664]: time="2025-10-09T19:28:53.52923686Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 09 19:28:53 ha-807463 crio[664]: time="2025-10-09T19:28:53.529253697Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 09 19:28:53 ha-807463 crio[664]: time="2025-10-09T19:28:53.534099464Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 09 19:28:53 ha-807463 crio[664]: time="2025-10-09T19:28:53.534352454Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 09 19:28:53 ha-807463 crio[664]: time="2025-10-09T19:28:53.534487904Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 09 19:28:53 ha-807463 crio[664]: time="2025-10-09T19:28:53.538988916Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 09 19:28:53 ha-807463 crio[664]: time="2025-10-09T19:28:53.539025699Z" level=info msg="Updated default CNI network name to kindnet"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                 NAMESPACE
	1416e569d8f8f       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6   7 minutes ago       Running             storage-provisioner       2                   7eb46fc741382       storage-provisioner                 kube-system
	0e94c30541006       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc   7 minutes ago       Running             coredns                   1                   215954c6e5b58       coredns-66bc5c9577-tswbs            kube-system
	9adc2cdd19000       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c   7 minutes ago       Running             kindnet-cni               1                   55085f7167d14       kindnet-rc46j                       kube-system
	49b67bb8cba0e       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6   7 minutes ago       Exited              storage-provisioner       1                   7eb46fc741382       storage-provisioner                 kube-system
	dc6736e2d83ca       2a8917f902489be5a8dd414209c32b77bd644d187ea646d86dbdc31e85efb551   7 minutes ago       Running             kube-vip                  1                   e1b7344c7d94c       kube-vip-ha-807463                  kube-system
	ca7bc93dc4dcf       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc   7 minutes ago       Running             coredns                   1                   833e6871e62e2       coredns-66bc5c9577-vkzgf            kube-system
	38276ddd00795       89a35e2ebb6b938201966889b5e8c85b931db6432c5643966116cd1c28bf45cd   7 minutes ago       Running             busybox                   1                   3daf554657528       busybox-7b57f96db7-5z2cl            default
	9f1fd2b441bae       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9   7 minutes ago       Running             kube-proxy                1                   77fe5d534a437       kube-proxy-b84dn                    kube-system
	71e4e3ae2d80c       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a   8 minutes ago       Running             kube-controller-manager   2                   5d2bd7a9c54dd       kube-controller-manager-ha-807463   kube-system
	9d475a483e702       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196   8 minutes ago       Running             kube-apiserver            1                   4ee70f1fb5f58       kube-apiserver-ha-807463            kube-system
	eb3eb3edb2fff       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a   8 minutes ago       Exited              kube-controller-manager   1                   5d2bd7a9c54dd       kube-controller-manager-ha-807463   kube-system
	e4593fb70e6dd       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0   8 minutes ago       Running             kube-scheduler            1                   2d270c8563e10       kube-scheduler-ha-807463            kube-system
	60abd5bf9ea13       2a8917f902489be5a8dd414209c32b77bd644d187ea646d86dbdc31e85efb551   8 minutes ago       Exited              kube-vip                  0                   e1b7344c7d94c       kube-vip-ha-807463                  kube-system
	4477522bd8536       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e   8 minutes ago       Running             etcd                      1                   a372bed836bce       etcd-ha-807463                      kube-system
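
This table is the runtime's view of containers on the primary node; a hedged way to reproduce it directly, assuming the profile name from this run:

  minikube ssh -p ha-807463 "sudo crictl ps -a"
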
	
	
	==> coredns [0e94c30541006adea7a9cf430df1905830797b4065898a1ff96a0a8704efcde5] <==
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:35674 - 64583 "HINFO IN 6906546124599759769.4081405551742000183. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.033625487s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	
	
	==> coredns [ca7bc93dc4dcf853db34af69a749d22d607d653f5e3ef5777c55ac602fd2a298] <==
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:56146 - 65116 "HINFO IN 9083642706827740027.9059612721108159707. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.020353957s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
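	The reflector failures above all target https://10.96.0.1:443, the in-cluster "kubernetes" Service clusterIP, i.e. CoreDNS could not reach the API server through the Service VIP during this window. A minimal, illustrative probe (the pod name and image are not part of the test suite) to check that endpoint from inside the cluster:
	
	  # run a throwaway client pod and hit the API server's /version endpoint via the Service VIP
	  kubectl run api-probe --rm -it --restart=Never --image=curlimages/curl --command -- \
	    curl -sk --max-time 10 https://10.96.0.1:443/version
	
	A timeout here reproduces the "dial tcp 10.96.0.1:443: i/o timeout" errors logged above; a JSON version response means the VIP is reachable again.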
	
	
	==> describe nodes <==
	Name:               ha-807463
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=ha-807463
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=67931d2cc0a2e29153be17ee4f2d502b8a45c9cb
	                    minikube.k8s.io/name=ha-807463
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_09T19_22_34_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 09 Oct 2025 19:22:31 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-807463
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 09 Oct 2025 19:36:02 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 09 Oct 2025 19:34:40 +0000   Thu, 09 Oct 2025 19:22:26 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 09 Oct 2025 19:34:40 +0000   Thu, 09 Oct 2025 19:22:26 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 09 Oct 2025 19:34:40 +0000   Thu, 09 Oct 2025 19:22:26 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 09 Oct 2025 19:34:40 +0000   Thu, 09 Oct 2025 19:22:50 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    ha-807463
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022292Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022292Ki
	  pods:               110
	System Info:
	  Machine ID:                 97aacaba546643c3a96be1e87893b40c
	  System UUID:                97caddd7-ad20-4ad3-87a9-90a149a84db2
	  Boot ID:                    7eb9c7d9-8be8-415a-b196-052b632584a4
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7b57f96db7-5z2cl             0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 coredns-66bc5c9577-tswbs             100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     13m
	  kube-system                 coredns-66bc5c9577-vkzgf             100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     13m
	  kube-system                 etcd-ha-807463                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         13m
	  kube-system                 kindnet-rc46j                        100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      13m
	  kube-system                 kube-apiserver-ha-807463             250m (12%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-controller-manager-ha-807463    200m (10%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-proxy-b84dn                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-scheduler-ha-807463             100m (5%)     0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-vip-ha-807463                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m55s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (47%)  100m (5%)
	  memory             290Mi (3%)  390Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 7m53s                  kube-proxy       
	  Normal   Starting                 13m                    kube-proxy       
	  Warning  CgroupV1                 13m                    kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientPID     13m (x8 over 13m)      kubelet          Node ha-807463 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    13m (x8 over 13m)      kubelet          Node ha-807463 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientMemory  13m (x8 over 13m)      kubelet          Node ha-807463 status is now: NodeHasSufficientMemory
	  Normal   Starting                 13m                    kubelet          Starting kubelet.
	  Warning  CgroupV1                 13m                    kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientPID     13m                    kubelet          Node ha-807463 status is now: NodeHasSufficientPID
	  Normal   Starting                 13m                    kubelet          Starting kubelet.
	  Normal   NodeHasSufficientMemory  13m                    kubelet          Node ha-807463 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    13m                    kubelet          Node ha-807463 status is now: NodeHasNoDiskPressure
	  Normal   RegisteredNode           13m                    node-controller  Node ha-807463 event: Registered Node ha-807463 in Controller
	  Normal   NodeReady                13m                    kubelet          Node ha-807463 status is now: NodeReady
	  Normal   RegisteredNode           12m                    node-controller  Node ha-807463 event: Registered Node ha-807463 in Controller
	  Normal   RegisteredNode           11m                    node-controller  Node ha-807463 event: Registered Node ha-807463 in Controller
	  Normal   RegisteredNode           8m51s                  node-controller  Node ha-807463 event: Registered Node ha-807463 in Controller
	  Normal   Starting                 8m29s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 8m29s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  8m29s (x8 over 8m29s)  kubelet          Node ha-807463 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    8m29s (x8 over 8m29s)  kubelet          Node ha-807463 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     8m29s (x8 over 8m29s)  kubelet          Node ha-807463 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           7m52s                  node-controller  Node ha-807463 event: Registered Node ha-807463 in Controller
	  Normal   RegisteredNode           7m42s                  node-controller  Node ha-807463 event: Registered Node ha-807463 in Controller
	
	
	Name:               ha-807463-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=ha-807463-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=67931d2cc0a2e29153be17ee4f2d502b8a45c9cb
	                    minikube.k8s.io/name=ha-807463
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2025_10_09T19_23_14_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 09 Oct 2025 19:23:14 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-807463-m02
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 09 Oct 2025 19:35:59 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 09 Oct 2025 19:33:46 +0000   Thu, 09 Oct 2025 19:23:14 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 09 Oct 2025 19:33:46 +0000   Thu, 09 Oct 2025 19:23:14 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 09 Oct 2025 19:33:46 +0000   Thu, 09 Oct 2025 19:23:14 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 09 Oct 2025 19:33:46 +0000   Thu, 09 Oct 2025 19:23:57 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.3
	  Hostname:    ha-807463-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022292Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022292Ki
	  pods:               110
	System Info:
	  Machine ID:                 615ff68d05a240648cf06e5cd58bdb14
	  System UUID:                4a17c7be-c74f-481f-8bf2-76a62cd3a90f
	  Boot ID:                    7eb9c7d9-8be8-415a-b196-052b632584a4
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7b57f96db7-xqc7g                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 etcd-ha-807463-m02                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         12m
	  kube-system                 kindnet-gvpmq                            100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      12m
	  kube-system                 kube-apiserver-ha-807463-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-controller-manager-ha-807463-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-proxy-7lpbk                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-scheduler-ha-807463-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-vip-ha-807463-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (1%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 7m44s                  kube-proxy       
	  Normal   Starting                 12m                    kube-proxy       
	  Normal   RegisteredNode           12m                    node-controller  Node ha-807463-m02 event: Registered Node ha-807463-m02 in Controller
	  Normal   RegisteredNode           12m                    node-controller  Node ha-807463-m02 event: Registered Node ha-807463-m02 in Controller
	  Normal   RegisteredNode           11m                    node-controller  Node ha-807463-m02 event: Registered Node ha-807463-m02 in Controller
	  Normal   NodeHasSufficientPID     9m31s (x8 over 9m31s)  kubelet          Node ha-807463-m02 status is now: NodeHasSufficientPID
	  Normal   Starting                 9m31s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 9m31s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  9m31s (x8 over 9m31s)  kubelet          Node ha-807463-m02 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    9m31s (x8 over 9m31s)  kubelet          Node ha-807463-m02 status is now: NodeHasNoDiskPressure
	  Normal   RegisteredNode           8m51s                  node-controller  Node ha-807463-m02 event: Registered Node ha-807463-m02 in Controller
	  Normal   Starting                 8m26s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 8m26s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  8m25s (x8 over 8m26s)  kubelet          Node ha-807463-m02 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    8m25s (x8 over 8m26s)  kubelet          Node ha-807463-m02 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     8m25s (x8 over 8m26s)  kubelet          Node ha-807463-m02 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           7m52s                  node-controller  Node ha-807463-m02 event: Registered Node ha-807463-m02 in Controller
	  Normal   RegisteredNode           7m42s                  node-controller  Node ha-807463-m02 event: Registered Node ha-807463-m02 in Controller
	
	
	Name:               ha-807463-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=ha-807463-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=67931d2cc0a2e29153be17ee4f2d502b8a45c9cb
	                    minikube.k8s.io/name=ha-807463
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2025_10_09T19_25_45_0700
	                    minikube.k8s.io/version=v1.37.0
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 09 Oct 2025 19:25:44 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-807463-m04
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 09 Oct 2025 19:26:56 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Thu, 09 Oct 2025 19:25:58 +0000   Thu, 09 Oct 2025 19:29:05 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Thu, 09 Oct 2025 19:25:58 +0000   Thu, 09 Oct 2025 19:29:05 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Thu, 09 Oct 2025 19:25:58 +0000   Thu, 09 Oct 2025 19:29:05 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Thu, 09 Oct 2025 19:25:58 +0000   Thu, 09 Oct 2025 19:29:05 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.49.5
	  Hostname:    ha-807463-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022292Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022292Ki
	  pods:               110
	System Info:
	  Machine ID:                 bc067731848740afab5ce03812f74006
	  System UUID:                0f2358b6-a095-45f9-8a33-badc490163a8
	  Boot ID:                    7eb9c7d9-8be8-415a-b196-052b632584a4
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-bc8tf       100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      10m
	  kube-system                 kube-proxy-2lp2p    0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-1Gi      0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	  hugepages-32Mi     0 (0%)     0 (0%)
	  hugepages-64Ki     0 (0%)     0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 10m                kube-proxy       
	  Normal   NodeHasNoDiskPressure    10m (x3 over 10m)  kubelet          Node ha-807463-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     10m (x3 over 10m)  kubelet          Node ha-807463-m04 status is now: NodeHasSufficientPID
	  Normal   Starting                 10m                kubelet          Starting kubelet.
	  Warning  CgroupV1                 10m                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  10m (x3 over 10m)  kubelet          Node ha-807463-m04 status is now: NodeHasSufficientMemory
	  Normal   RegisteredNode           10m                node-controller  Node ha-807463-m04 event: Registered Node ha-807463-m04 in Controller
	  Normal   RegisteredNode           10m                node-controller  Node ha-807463-m04 event: Registered Node ha-807463-m04 in Controller
	  Normal   RegisteredNode           10m                node-controller  Node ha-807463-m04 event: Registered Node ha-807463-m04 in Controller
	  Normal   NodeReady                10m                kubelet          Node ha-807463-m04 status is now: NodeReady
	  Normal   RegisteredNode           8m51s              node-controller  Node ha-807463-m04 event: Registered Node ha-807463-m04 in Controller
	  Normal   RegisteredNode           7m52s              node-controller  Node ha-807463-m04 event: Registered Node ha-807463-m04 in Controller
	  Normal   RegisteredNode           7m42s              node-controller  Node ha-807463-m04 event: Registered Node ha-807463-m04 in Controller
	  Normal   NodeNotReady             7m2s               node-controller  Node ha-807463-m04 status is now: NodeNotReady
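	The Unknown conditions and the node.kubernetes.io/unreachable taints above mean the m04 kubelet stopped posting status (last lease renewal 19:26:56; the node controller marked the conditions Unknown at 19:29:05 and later recorded NodeNotReady). The taints can be listed directly with, for example:
	
	  kubectl get node ha-807463-m04 -o jsonpath='{range .spec.taints[*]}{.key}={.effect}{"\n"}{end}'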
	
	
	==> dmesg <==
	[Oct 9 17:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.015195] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.531968] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.036847] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.757016] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +6.932356] kauditd_printk_skb: 36 callbacks suppressed
	[Oct 9 18:02] hrtimer: interrupt took 20603549 ns
	[Oct 9 18:59] kauditd_printk_skb: 8 callbacks suppressed
	[Oct 9 19:02] overlayfs: idmapped layers are currently not supported
	[  +0.066862] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[Oct 9 19:07] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:08] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:14] kauditd_printk_skb: 8 callbacks suppressed
	[Oct 9 19:22] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:23] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:24] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:25] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:26] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:27] overlayfs: idmapped layers are currently not supported
	[  +3.297009] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:28] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [4477522bd8536fe09afcc2397cd8beb927ccd19a6714098fb7bb1f3ef47595ea] <==
	{"level":"warn","ts":"2025-10-09T19:35:49.704512Z","caller":"etcdserver/cluster_util.go:259","msg":"failed to reach the peer URL","address":"https://192.168.49.4:2380/version","remote-member-id":"95a22811bdce1330","error":"Get \"https://192.168.49.4:2380/version\": dial tcp 192.168.49.4:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-10-09T19:35:49.704562Z","caller":"etcdserver/cluster_util.go:160","msg":"failed to get version","remote-member-id":"95a22811bdce1330","error":"Get \"https://192.168.49.4:2380/version\": dial tcp 192.168.49.4:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-10-09T19:35:49.739178Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"95a22811bdce1330","rtt":"154.736798ms","error":"dial tcp 192.168.49.4:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-10-09T19:35:49.746917Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"95a22811bdce1330","rtt":"173.409777ms","error":"dial tcp 192.168.49.4:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-10-09T19:35:53.706247Z","caller":"etcdserver/cluster_util.go:259","msg":"failed to reach the peer URL","address":"https://192.168.49.4:2380/version","remote-member-id":"95a22811bdce1330","error":"Get \"https://192.168.49.4:2380/version\": dial tcp 192.168.49.4:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-10-09T19:35:53.706306Z","caller":"etcdserver/cluster_util.go:160","msg":"failed to get version","remote-member-id":"95a22811bdce1330","error":"Get \"https://192.168.49.4:2380/version\": dial tcp 192.168.49.4:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-10-09T19:35:54.739992Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"95a22811bdce1330","rtt":"154.736798ms","error":"dial tcp 192.168.49.4:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-10-09T19:35:54.748135Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"95a22811bdce1330","rtt":"173.409777ms","error":"dial tcp 192.168.49.4:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-10-09T19:35:57.707713Z","caller":"etcdserver/cluster_util.go:259","msg":"failed to reach the peer URL","address":"https://192.168.49.4:2380/version","remote-member-id":"95a22811bdce1330","error":"Get \"https://192.168.49.4:2380/version\": dial tcp 192.168.49.4:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-10-09T19:35:57.707776Z","caller":"etcdserver/cluster_util.go:160","msg":"failed to get version","remote-member-id":"95a22811bdce1330","error":"Get \"https://192.168.49.4:2380/version\": dial tcp 192.168.49.4:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-10-09T19:35:59.740166Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"95a22811bdce1330","rtt":"154.736798ms","error":"dial tcp 192.168.49.4:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-10-09T19:35:59.749411Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"95a22811bdce1330","rtt":"173.409777ms","error":"dial tcp 192.168.49.4:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-10-09T19:36:00.169034Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"141.228504ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/ha-807463-m03\" limit:1 ","response":"range_response_count:1 size:6578"}
	{"level":"info","ts":"2025-10-09T19:36:00.169138Z","caller":"traceutil/trace.go:172","msg":"trace[1509218737] range","detail":"{range_begin:/registry/minions/ha-807463-m03; range_end:; response_count:1; response_revision:2994; }","duration":"141.331226ms","start":"2025-10-09T19:36:00.027768Z","end":"2025-10-09T19:36:00.169100Z","steps":["trace[1509218737] 'agreement among raft nodes before linearized reading'  (duration: 38.564143ms)","trace[1509218737] 'range keys from bolt db'  (duration: 102.603535ms)"],"step_count":2}
	{"level":"warn","ts":"2025-10-09T19:36:00.944560Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"192.168.49.4:44124","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-10-09T19:36:01.023009Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1981","msg":"aec36adc501070cc switched to configuration voters=(12478773398777683558 12593026477526642892)"}
	{"level":"info","ts":"2025-10-09T19:36:01.027817Z","caller":"membership/cluster.go:460","msg":"removed member","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","removed-remote-peer-id":"95a22811bdce1330","removed-remote-peer-urls":["https://192.168.49.4:2380"],"removed-remote-peer-is-learner":false}
	{"level":"info","ts":"2025-10-09T19:36:01.027877Z","caller":"rafthttp/peer.go:316","msg":"stopping remote peer","remote-peer-id":"95a22811bdce1330"}
	{"level":"info","ts":"2025-10-09T19:36:01.027910Z","caller":"rafthttp/stream.go:293","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"95a22811bdce1330"}
	{"level":"info","ts":"2025-10-09T19:36:01.027932Z","caller":"rafthttp/stream.go:293","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"95a22811bdce1330"}
	{"level":"info","ts":"2025-10-09T19:36:01.027948Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"aec36adc501070cc","remote-peer-id":"95a22811bdce1330"}
	{"level":"info","ts":"2025-10-09T19:36:01.027981Z","caller":"rafthttp/stream.go:441","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"aec36adc501070cc","remote-peer-id":"95a22811bdce1330"}
	{"level":"info","ts":"2025-10-09T19:36:01.027997Z","caller":"rafthttp/stream.go:441","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"aec36adc501070cc","remote-peer-id":"95a22811bdce1330"}
	{"level":"info","ts":"2025-10-09T19:36:01.028004Z","caller":"rafthttp/peer.go:321","msg":"stopped remote peer","remote-peer-id":"95a22811bdce1330"}
	{"level":"info","ts":"2025-10-09T19:36:01.028015Z","caller":"rafthttp/transport.go:354","msg":"removed remote peer","local-member-id":"aec36adc501070cc","removed-remote-peer-id":"95a22811bdce1330"}
	
	
	==> kernel <==
	 19:36:07 up  2:18,  0 user,  load average: 1.07, 1.30, 1.56
	Linux ha-807463 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [9adc2cdd19000926b9c7696c7b7924afabffb77a3346b0bea81bc99d3f74aa0f] <==
	I1009 19:35:33.521332       1 main.go:324] Node ha-807463-m03 has CIDR [10.244.2.0/24] 
	I1009 19:35:33.521390       1 main.go:297] Handling node with IPs: map[192.168.49.5:{}]
	I1009 19:35:33.521401       1 main.go:324] Node ha-807463-m04 has CIDR [10.244.3.0/24] 
	I1009 19:35:43.527640       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1009 19:35:43.527674       1 main.go:301] handling current node
	I1009 19:35:43.527690       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I1009 19:35:43.527696       1 main.go:324] Node ha-807463-m02 has CIDR [10.244.1.0/24] 
	I1009 19:35:43.527890       1 main.go:297] Handling node with IPs: map[192.168.49.4:{}]
	I1009 19:35:43.527932       1 main.go:324] Node ha-807463-m03 has CIDR [10.244.2.0/24] 
	I1009 19:35:43.528029       1 main.go:297] Handling node with IPs: map[192.168.49.5:{}]
	I1009 19:35:43.528041       1 main.go:324] Node ha-807463-m04 has CIDR [10.244.3.0/24] 
	I1009 19:35:53.521146       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1009 19:35:53.521182       1 main.go:301] handling current node
	I1009 19:35:53.521198       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I1009 19:35:53.521204       1 main.go:324] Node ha-807463-m02 has CIDR [10.244.1.0/24] 
	I1009 19:35:53.521367       1 main.go:297] Handling node with IPs: map[192.168.49.4:{}]
	I1009 19:35:53.521387       1 main.go:324] Node ha-807463-m03 has CIDR [10.244.2.0/24] 
	I1009 19:35:53.521449       1 main.go:297] Handling node with IPs: map[192.168.49.5:{}]
	I1009 19:35:53.521462       1 main.go:324] Node ha-807463-m04 has CIDR [10.244.3.0/24] 
	I1009 19:36:03.521444       1 main.go:297] Handling node with IPs: map[192.168.49.5:{}]
	I1009 19:36:03.521477       1 main.go:324] Node ha-807463-m04 has CIDR [10.244.3.0/24] 
	I1009 19:36:03.521843       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1009 19:36:03.521923       1 main.go:301] handling current node
	I1009 19:36:03.521942       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I1009 19:36:03.521948       1 main.go:324] Node ha-807463-m02 has CIDR [10.244.1.0/24] 
	
	
	==> kube-apiserver [9d475a483e7023b214d8a1506f2ba793d2cb34e4e0e7b5f0fc49d91b875116f7] <==
	E1009 19:28:12.347613       1 watcher.go:335] watch chan error: etcdserver: no leader
	E1009 19:28:12.347635       1 watcher.go:335] watch chan error: etcdserver: no leader
	E1009 19:28:12.349427       1 watcher.go:335] watch chan error: etcdserver: no leader
	E1009 19:28:12.349481       1 watcher.go:335] watch chan error: etcdserver: no leader
	{"level":"warn","ts":"2025-10-09T19:28:12.355259Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x40013b72c0/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":1,"error":"rpc error: code = Unavailable desc = etcdserver: leader changed"}
	{"level":"warn","ts":"2025-10-09T19:28:12.355532Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x4001b2d2c0/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":1,"error":"rpc error: code = Unavailable desc = etcdserver: leader changed"}
	{"level":"warn","ts":"2025-10-09T19:28:12.355680Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x4001b2d2c0/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":1,"error":"rpc error: code = Unavailable desc = etcdserver: leader changed"}
	{"level":"warn","ts":"2025-10-09T19:28:12.355754Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x40013b72c0/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":1,"error":"rpc error: code = Unavailable desc = etcdserver: leader changed"}
	{"level":"warn","ts":"2025-10-09T19:28:12.355815Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x400046cb40/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":0,"error":"rpc error: code = Unavailable desc = etcdserver: leader changed"}
	{"level":"warn","ts":"2025-10-09T19:28:12.355846Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x400126d680/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":1,"error":"rpc error: code = Unavailable desc = etcdserver: leader changed"}
	{"level":"warn","ts":"2025-10-09T19:28:12.355882Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x40013b72c0/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":1,"error":"rpc error: code = Unavailable desc = etcdserver: leader changed"}
	{"level":"warn","ts":"2025-10-09T19:28:12.355910Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x4000e925a0/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":0,"error":"rpc error: code = Unavailable desc = etcdserver: leader changed"}
	{"level":"warn","ts":"2025-10-09T19:28:12.355975Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x4001959680/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":0,"error":"rpc error: code = Unavailable desc = etcdserver: leader changed"}
	{"level":"warn","ts":"2025-10-09T19:28:12.357750Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x4001959680/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":1,"error":"rpc error: code = Unavailable desc = etcdserver: leader changed"}
	{"level":"warn","ts":"2025-10-09T19:28:12.357862Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x400126cb40/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":1,"error":"rpc error: code = Unavailable desc = etcdserver: leader changed"}
	E1009 19:28:12.360026       1 watcher.go:335] watch chan error: etcdserver: no leader
	E1009 19:28:12.360254       1 watcher.go:335] watch chan error: etcdserver: no leader
	{"level":"warn","ts":"2025-10-09T19:28:12.373075Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x40013b72c0/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":1,"error":"rpc error: code = Unavailable desc = etcdserver: leader changed"}
	{"level":"warn","ts":"2025-10-09T19:28:12.373191Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x40013b72c0/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":1,"error":"rpc error: code = Unavailable desc = etcdserver: leader changed"}
	{"level":"warn","ts":"2025-10-09T19:28:12.373230Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x40013b72c0/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":1,"error":"rpc error: code = Unavailable desc = etcdserver: leader changed"}
	I1009 19:28:12.408675       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	W1009 19:28:13.946618       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.49.2 192.168.49.3 192.168.49.4]
	I1009 19:28:15.990769       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1009 19:28:16.088830       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1009 19:28:22.340287       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	
	
	==> kube-controller-manager [71e4e3ae2d80c0bff2e415aa94adbf172f0541a980a58bc060eaf4114ebfa411] <==
	I1009 19:28:15.790006       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-807463-m04"
	I1009 19:28:15.790043       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-807463"
	I1009 19:28:15.790073       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-807463-m02"
	I1009 19:28:15.792034       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1009 19:28:15.792087       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1009 19:28:15.816012       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1009 19:28:15.816096       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1009 19:28:15.816262       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1009 19:28:15.816289       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1009 19:28:15.816340       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1009 19:28:15.816365       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1009 19:28:15.853279       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1009 19:28:15.853454       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1009 19:28:15.853525       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1009 19:28:15.853569       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1009 19:28:15.900161       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1009 19:28:15.900708       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1009 19:28:15.900757       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1009 19:28:15.900945       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1009 19:28:45.936143       1 endpointslice_controller.go:344] "Error syncing endpoint slices for service, retrying" logger="endpointslice-controller" key="kube-system/kube-dns" err="failed to update kube-dns-f6lp8 EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io \"kube-dns-f6lp8\": the object has been modified; please apply your changes to the latest version and try again"
	I1009 19:28:45.936767       1 event.go:377] Event(v1.ObjectReference{Kind:"Service", Namespace:"kube-system", Name:"kube-dns", UID:"d7915f4a-fefa-4618-a648-059d33b61abc", APIVersion:"v1", ResourceVersion:"291", FieldPath:""}): type: 'Warning' reason: 'FailedToUpdateEndpointSlices' Error updating Endpoint Slices for Service kube-system/kube-dns: failed to update kube-dns-f6lp8 EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io "kube-dns-f6lp8": the object has been modified; please apply your changes to the latest version and try again
	I1009 19:34:15.936504       1 taint_eviction.go:111] "Deleting pod" logger="taint-eviction-controller" controller="taint-eviction-controller" pod="default/busybox-7b57f96db7-99qlt"
	E1009 19:34:16.137977       1 replica_set.go:587] "Unhandled Error" err="sync \"default/busybox-7b57f96db7\" failed with Operation cannot be fulfilled on replicasets.apps \"busybox-7b57f96db7\": the object has been modified; please apply your changes to the latest version and try again" logger="UnhandledError"
	E1009 19:36:01.577191       1 garbagecollector.go:360] "Unhandled Error" err="error syncing item &garbagecollector.node{identity:garbagecollector.objectReference{OwnerReference:v1.OwnerReference{APIVersion:\"coordination.k8s.io/v1\", Kind:\"Lease\", Name:\"ha-807463-m03\", UID:\"ff3d2082-0b19-486f-bf15-ebb70544cffc\", Controller:(*bool)(nil), BlockOwnerDeletion:(*bool)(nil)}, Namespace:\"kube-node-lease\"}, dependentsLock:sync.RWMutex{w:sync.Mutex{_:sync.noCopy{}, mu:sync.Mutex{state:0, sema:0x0}}, writerSem:0x0, readerSem:0x0, readerCount:atomic.Int32{_:atomic.noCopy{}, v:1}, readerWait:atomic.Int32{_:atomic.noCopy{}, v:0}}, dependents:map[*garbagecollector.node]struct {}{}, deletingDependents:false, deletingDependentsLock:sync.RWMutex{w:sync.Mutex{_:sync.noCopy{}, mu:sync.Mutex{state:0, sema:0x0}}, writerSem:0x0, readerSem:0x0, readerCount:atomic.Int32{_:atomic.noCopy{}, v:0}, readerWait:atomic.Int32{_:atomic.noCopy{}, v:0}}, beingDeleted:false, beingDeletedLock:sync.RWMutex{w:sync.Mutex{_:sync.noC
opy{}, mu:sync.Mutex{state:0, sema:0x0}}, writerSem:0x0, readerSem:0x0, readerCount:atomic.Int32{_:atomic.noCopy{}, v:0}, readerWait:atomic.Int32{_:atomic.noCopy{}, v:0}}, virtual:false, virtualLock:sync.RWMutex{w:sync.Mutex{_:sync.noCopy{}, mu:sync.Mutex{state:0, sema:0x0}}, writerSem:0x0, readerSem:0x0, readerCount:atomic.Int32{_:atomic.noCopy{}, v:0}, readerWait:atomic.Int32{_:atomic.noCopy{}, v:0}}, owners:[]v1.OwnerReference{v1.OwnerReference{APIVersion:\"v1\", Kind:\"Node\", Name:\"ha-807463-m03\", UID:\"ee3912b8-8841-45c0-9a4d-6e7b3ad8f5ce\", Controller:(*bool)(nil), BlockOwnerDeletion:(*bool)(nil)}}}: leases.coordination.k8s.io \"ha-807463-m03\" not found" logger="UnhandledError"
	E1009 19:36:01.621414       1 garbagecollector.go:360] "Unhandled Error" err="error syncing item &garbagecollector.node{identity:garbagecollector.objectReference{OwnerReference:v1.OwnerReference{APIVersion:\"storage.k8s.io/v1\", Kind:\"CSINode\", Name:\"ha-807463-m03\", UID:\"b4065252-cfe5-42ae-b4c2-b21091f1a081\", Controller:(*bool)(nil), BlockOwnerDeletion:(*bool)(nil)}, Namespace:\"\"}, dependentsLock:sync.RWMutex{w:sync.Mutex{_:sync.noCopy{}, mu:sync.Mutex{state:0, sema:0x0}}, writerSem:0x0, readerSem:0x0, readerCount:atomic.Int32{_:atomic.noCopy{}, v:1}, readerWait:atomic.Int32{_:atomic.noCopy{}, v:0}}, dependents:map[*garbagecollector.node]struct {}{}, deletingDependents:false, deletingDependentsLock:sync.RWMutex{w:sync.Mutex{_:sync.noCopy{}, mu:sync.Mutex{state:0, sema:0x0}}, writerSem:0x0, readerSem:0x0, readerCount:atomic.Int32{_:atomic.noCopy{}, v:0}, readerWait:atomic.Int32{_:atomic.noCopy{}, v:0}}, beingDeleted:false, beingDeletedLock:sync.RWMutex{w:sync.Mutex{_:sync.noCopy{}, mu:sync.Mut
ex{state:0, sema:0x0}}, writerSem:0x0, readerSem:0x0, readerCount:atomic.Int32{_:atomic.noCopy{}, v:0}, readerWait:atomic.Int32{_:atomic.noCopy{}, v:0}}, virtual:false, virtualLock:sync.RWMutex{w:sync.Mutex{_:sync.noCopy{}, mu:sync.Mutex{state:0, sema:0x0}}, writerSem:0x0, readerSem:0x0, readerCount:atomic.Int32{_:atomic.noCopy{}, v:0}, readerWait:atomic.Int32{_:atomic.noCopy{}, v:0}}, owners:[]v1.OwnerReference{v1.OwnerReference{APIVersion:\"v1\", Kind:\"Node\", Name:\"ha-807463-m03\", UID:\"ee3912b8-8841-45c0-9a4d-6e7b3ad8f5ce\", Controller:(*bool)(nil), BlockOwnerDeletion:(*bool)(nil)}}}: csinodes.storage.k8s.io \"ha-807463-m03\" not found" logger="UnhandledError"
	
	
	==> kube-controller-manager [eb3eb3edb2fff30f90b98210a15c7960a0d8f4700c380a4bc2a236e3530d4043] <==
	I1009 19:27:40.800035       1 serving.go:386] Generated self-signed cert in-memory
	I1009 19:27:45.392772       1 controllermanager.go:191] "Starting" version="v1.34.1"
	I1009 19:27:45.392919       1 controllermanager.go:193] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1009 19:27:45.408597       1 secure_serving.go:211] Serving securely on 127.0.0.1:10257
	I1009 19:27:45.408878       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1009 19:27:45.409007       1 dynamic_cafile_content.go:161] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I1009 19:27:45.409053       1 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	E1009 19:28:00.394482       1 controllermanager.go:245] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: an error on the server (\"[+]ping ok\\n[+]log ok\\n[-]etcd failed: reason withheld\\n[+]poststarthook/start-apiserver-admission-initializer ok\\n[+]poststarthook/generic-apiserver-start-informers ok\\n[+]poststarthook/priority-and-fairness-config-consumer ok\\n[+]poststarthook/priority-and-fairness-filter ok\\n[+]poststarthook/storage-object-count-tracker-hook ok\\n[+]poststarthook/start-apiextensions-informers ok\\n[+]poststarthook/start-apiextensions-controllers ok\\n[+]poststarthook/crd-informer-synced ok\\n[+]poststarthook/start-system-namespaces-controller ok\\n[+]poststarthook/start-cluster-authentication-info-controller ok\\n[+]poststarthook/start-kube-apiserver-identity-lease-controller ok\\n[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok\\n[+]poststar
thook/start-legacy-token-tracking-controller ok\\n[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld\\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\\n[+]poststarthook/priority-and-fairness-config-producer ok\\n[+]poststarthook/bootstrap-controller ok\\n[+]poststarthook/start-kubernetes-service-cidr-controller ok\\n[+]poststarthook/aggregator-reload-proxy-client-cert ok\\n[+]poststarthook/start-kube-aggregator-informers ok\\n[+]poststarthook/apiservice-status-local-available-controller ok\\n[+]poststarthook/apiservice-status-remote-available-controller ok\\n[+]poststarthook/apiservice-registration-controller ok\\n[+]poststarthook/apiservice-discovery-controller ok\\n[+]poststarthook/kube-apiserver-autoregistration ok\\n[+]autoregister-completion ok\\n[+]poststarthook/apiservice-openapi-controller ok\\n[+]poststarthook/apiservice-openapiv3-controller ok\\nhealthz check failed\") has prevented the reques
t from succeeding"
	
	
	==> kube-proxy [9f1fd2b441bae8a1e1677da06354cd58eb9120cf79ae41fd89aade0d9e36317b] <==
	I1009 19:28:13.524866       1 server_linux.go:53] "Using iptables proxy"
	I1009 19:28:13.683998       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1009 19:28:13.785200       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1009 19:28:13.785297       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1009 19:28:13.785401       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1009 19:28:13.850524       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1009 19:28:13.850775       1 server_linux.go:132] "Using iptables Proxier"
	I1009 19:28:13.858532       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1009 19:28:13.859447       1 server.go:527] "Version info" version="v1.34.1"
	I1009 19:28:13.859472       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1009 19:28:13.869614       1 config.go:200] "Starting service config controller"
	I1009 19:28:13.869702       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1009 19:28:13.869759       1 config.go:106] "Starting endpoint slice config controller"
	I1009 19:28:13.869806       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1009 19:28:13.869854       1 config.go:403] "Starting serviceCIDR config controller"
	I1009 19:28:13.869903       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1009 19:28:13.870681       1 config.go:309] "Starting node config controller"
	I1009 19:28:13.870751       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1009 19:28:13.870783       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1009 19:28:13.977741       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1009 19:28:13.979480       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1009 19:28:13.979510       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [e4593fb70e6dd0047bc83f89897d4c1ad23896e5ca9a3628c4bbeea360f8cbaf] <==
	E1009 19:27:48.441390       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1009 19:27:48.441455       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1009 19:27:48.441529       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1009 19:27:48.441597       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1009 19:27:48.441717       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1009 19:27:48.441800       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1009 19:27:48.441887       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1009 19:27:48.441935       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1009 19:27:49.269919       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1009 19:27:49.288585       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1009 19:27:49.311114       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1009 19:27:49.371959       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1009 19:27:49.404581       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1009 19:27:49.410730       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1009 19:27:49.410883       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1009 19:27:49.418641       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1009 19:27:49.443744       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E1009 19:27:49.470207       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1009 19:27:49.520778       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1009 19:27:49.544432       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1009 19:27:49.566871       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1009 19:27:49.622487       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1009 19:27:49.659599       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1009 19:27:49.667074       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	I1009 19:27:51.424577       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 09 19:27:50 ha-807463 kubelet[800]: I1009 19:27:50.615218     800 apiserver.go:52] "Watching apiserver"
	Oct 09 19:27:50 ha-807463 kubelet[800]: I1009 19:27:50.620110     800 kubelet.go:3202] "Trying to delete pod" pod="kube-system/kube-vip-ha-807463" podUID="2851b5b6-b28e-4749-8fba-920501dc7be3"
	Oct 09 19:27:50 ha-807463 kubelet[800]: I1009 19:27:50.622751     800 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Oct 09 19:27:50 ha-807463 kubelet[800]: I1009 19:27:50.663228     800 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/b9e8a81e-2bee-4542-b231-7490dfbf6065-tmp\") pod \"storage-provisioner\" (UID: \"b9e8a81e-2bee-4542-b231-7490dfbf6065\") " pod="kube-system/storage-provisioner"
	Oct 09 19:27:50 ha-807463 kubelet[800]: I1009 19:27:50.663304     800 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9c10ee5e-8408-4b6f-985a-8d4f44a869cc-xtables-lock\") pod \"kube-proxy-b84dn\" (UID: \"9c10ee5e-8408-4b6f-985a-8d4f44a869cc\") " pod="kube-system/kube-proxy-b84dn"
	Oct 09 19:27:50 ha-807463 kubelet[800]: I1009 19:27:50.663360     800 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/22f58fe4-1d11-4259-b9f9-e8740b8b2257-cni-cfg\") pod \"kindnet-rc46j\" (UID: \"22f58fe4-1d11-4259-b9f9-e8740b8b2257\") " pod="kube-system/kindnet-rc46j"
	Oct 09 19:27:50 ha-807463 kubelet[800]: I1009 19:27:50.663389     800 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9c10ee5e-8408-4b6f-985a-8d4f44a869cc-lib-modules\") pod \"kube-proxy-b84dn\" (UID: \"9c10ee5e-8408-4b6f-985a-8d4f44a869cc\") " pod="kube-system/kube-proxy-b84dn"
	Oct 09 19:27:50 ha-807463 kubelet[800]: I1009 19:27:50.663421     800 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/22f58fe4-1d11-4259-b9f9-e8740b8b2257-xtables-lock\") pod \"kindnet-rc46j\" (UID: \"22f58fe4-1d11-4259-b9f9-e8740b8b2257\") " pod="kube-system/kindnet-rc46j"
	Oct 09 19:27:50 ha-807463 kubelet[800]: I1009 19:27:50.663440     800 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/22f58fe4-1d11-4259-b9f9-e8740b8b2257-lib-modules\") pod \"kindnet-rc46j\" (UID: \"22f58fe4-1d11-4259-b9f9-e8740b8b2257\") " pod="kube-system/kindnet-rc46j"
	Oct 09 19:27:50 ha-807463 kubelet[800]: I1009 19:27:50.667816     800 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="976e04e1cbea4b516ead31d4a83e047c" path="/var/lib/kubelet/pods/976e04e1cbea4b516ead31d4a83e047c/volumes"
	Oct 09 19:28:00 ha-807463 kubelet[800]: I1009 19:28:00.774505     800 scope.go:117] "RemoveContainer" containerID="eb3eb3edb2fff30f90b98210a15c7960a0d8f4700c380a4bc2a236e3530d4043"
	Oct 09 19:28:10 ha-807463 kubelet[800]: E1009 19:28:10.305261     800 controller.go:195] "Failed to update lease" err="Put \"https://192.168.49.2:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ha-807463?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
	Oct 09 19:28:10 ha-807463 kubelet[800]: E1009 19:28:10.446000     800 kubelet_node_status.go:486] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-09T19:28:00Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-09T19:28:00Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-09T19:28:00Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-09T19:28:00Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"re
cursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"ha-807463\": Patch \"https://192.168.49.2:8443/api/v1/nodes/ha-807463/status?timeout=10s\": context deadline exceeded"
	Oct 09 19:28:12 ha-807463 kubelet[800]: I1009 19:28:12.468697     800 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	Oct 09 19:28:12 ha-807463 kubelet[800]: I1009 19:28:12.552182     800 kubelet.go:3208] "Deleted mirror pod as it didn't match the static Pod" pod="kube-system/kube-vip-ha-807463"
	Oct 09 19:28:12 ha-807463 kubelet[800]: I1009 19:28:12.552222     800 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-vip-ha-807463"
	Oct 09 19:28:12 ha-807463 kubelet[800]: W1009 19:28:12.667154     800 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/fea8f67be9d437ba2c81eace43786d004b06e1b2b690c3ab6f02d448677e04b6/crio-3daf554657528d08ab602a2eafcc6211b760b3734a78136296b70f4b7a32baf0 WatchSource:0}: Error finding container 3daf554657528d08ab602a2eafcc6211b760b3734a78136296b70f4b7a32baf0: Status 404 returned error can't find the container with id 3daf554657528d08ab602a2eafcc6211b760b3734a78136296b70f4b7a32baf0
	Oct 09 19:28:12 ha-807463 kubelet[800]: W1009 19:28:12.708992     800 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/fea8f67be9d437ba2c81eace43786d004b06e1b2b690c3ab6f02d448677e04b6/crio-833e6871e62e2720786472951e1248b710ee0b6ab3e58c51a072c96c41234008 WatchSource:0}: Error finding container 833e6871e62e2720786472951e1248b710ee0b6ab3e58c51a072c96c41234008: Status 404 returned error can't find the container with id 833e6871e62e2720786472951e1248b710ee0b6ab3e58c51a072c96c41234008
	Oct 09 19:28:12 ha-807463 kubelet[800]: I1009 19:28:12.824883     800 kubelet.go:3202] "Trying to delete pod" pod="kube-system/kube-vip-ha-807463" podUID="2851b5b6-b28e-4749-8fba-920501dc7be3"
	Oct 09 19:28:12 ha-807463 kubelet[800]: I1009 19:28:12.854312     800 scope.go:117] "RemoveContainer" containerID="60abd5bf9ea13b7e15b4cb133643cb620ae0f536d45d6ac30703be2e3ef7a45f"
	Oct 09 19:28:13 ha-807463 kubelet[800]: W1009 19:28:13.100847     800 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/fea8f67be9d437ba2c81eace43786d004b06e1b2b690c3ab6f02d448677e04b6/crio-215954c6e5b58ec4e1876606af4120f74fa1b735788f97d908b617d088e10218 WatchSource:0}: Error finding container 215954c6e5b58ec4e1876606af4120f74fa1b735788f97d908b617d088e10218: Status 404 returned error can't find the container with id 215954c6e5b58ec4e1876606af4120f74fa1b735788f97d908b617d088e10218
	Oct 09 19:28:13 ha-807463 kubelet[800]: I1009 19:28:13.258189     800 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-vip-ha-807463" podStartSLOduration=1.258171868 podStartE2EDuration="1.258171868s" podCreationTimestamp="2025-10-09 19:28:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-09 19:28:13.202642686 +0000 UTC m=+34.717581260" watchObservedRunningTime="2025-10-09 19:28:13.258171868 +0000 UTC m=+34.773110434"
	Oct 09 19:28:38 ha-807463 kubelet[800]: E1009 19:28:38.614610     800 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"75a5236150873d2e47f94fa0ec7a3606e1bb185ee804c71cf7aaaaeb1a9af3aa\": container with ID starting with 75a5236150873d2e47f94fa0ec7a3606e1bb185ee804c71cf7aaaaeb1a9af3aa not found: ID does not exist" containerID="75a5236150873d2e47f94fa0ec7a3606e1bb185ee804c71cf7aaaaeb1a9af3aa"
	Oct 09 19:28:38 ha-807463 kubelet[800]: I1009 19:28:38.614682     800 kuberuntime_gc.go:364] "Error getting ContainerStatus for containerID" containerID="75a5236150873d2e47f94fa0ec7a3606e1bb185ee804c71cf7aaaaeb1a9af3aa" err="rpc error: code = NotFound desc = could not find container \"75a5236150873d2e47f94fa0ec7a3606e1bb185ee804c71cf7aaaaeb1a9af3aa\": container with ID starting with 75a5236150873d2e47f94fa0ec7a3606e1bb185ee804c71cf7aaaaeb1a9af3aa not found: ID does not exist"
	Oct 09 19:28:43 ha-807463 kubelet[800]: I1009 19:28:43.955424     800 scope.go:117] "RemoveContainer" containerID="49b67bb8cba0ee99aca2811ac91734a84329f896cb75fab3ad456d53105ce0a1"
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p ha-807463 -n ha-807463
helpers_test.go:269: (dbg) Run:  kubectl --context ha-807463 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: busybox-7b57f96db7-hm827
helpers_test.go:282: ======> post-mortem[TestMultiControlPlane/serial/DeleteSecondaryNode]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context ha-807463 describe pod busybox-7b57f96db7-hm827
helpers_test.go:290: (dbg) kubectl --context ha-807463 describe pod busybox-7b57f96db7-hm827:

                                                
                                                
-- stdout --
	Name:             busybox-7b57f96db7-hm827
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             <none>
	Labels:           app=busybox
	                  pod-template-hash=7b57f96db7
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Controlled By:    ReplicaSet/busybox-7b57f96db7
	Containers:
	  busybox:
	    Image:      gcr.io/k8s-minikube/busybox:1.28
	    Port:       <none>
	    Host Port:  <none>
	    Command:
	      sleep
	      3600
	    Environment:  <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-d8g9g (ro)
	Conditions:
	  Type           Status
	  PodScheduled   False 
	Volumes:
	  kube-api-access-d8g9g:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason            Age   From               Message
	  ----     ------            ----  ----               -------
	  Warning  FailedScheduling  112s  default-scheduler  0/4 nodes are available: 2 node(s) didn't match pod anti-affinity rules, 2 node(s) had untolerated taint {node.kubernetes.io/unreachable: }. no new claims to deallocate, preemption: 0/4 nodes are available: 2 No preemption victims found for incoming pod, 2 Preemption is not helpful for scheduling.
	  Warning  FailedScheduling  112s  default-scheduler  0/4 nodes are available: 2 node(s) didn't match pod anti-affinity rules, 2 node(s) had untolerated taint {node.kubernetes.io/unreachable: }. no new claims to deallocate, preemption: 0/4 nodes are available: 2 No preemption victims found for incoming pod, 2 Preemption is not helpful for scheduling.
	  Warning  FailedScheduling  8s    default-scheduler  0/4 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/unreachable: }, 1 node(s) were unschedulable, 2 node(s) didn't match pod anti-affinity rules. no new claims to deallocate, preemption: 0/4 nodes are available: 2 No preemption victims found for incoming pod, 2 Preemption is not helpful for scheduling.
	  Warning  FailedScheduling  8s    default-scheduler  0/4 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/unreachable: }, 1 node(s) were unschedulable, 2 node(s) didn't match pod anti-affinity rules. no new claims to deallocate, preemption: 0/4 nodes are available: 2 No preemption victims found for incoming pod, 2 Preemption is not helpful for scheduling.

                                                
                                                
-- /stdout --
helpers_test.go:293: <<< TestMultiControlPlane/serial/DeleteSecondaryNode FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/DeleteSecondaryNode (9.16s)
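Editor's note: the FailedScheduling events captured above combine two independent blockers. The unreachable/unschedulable control-plane nodes are filtered out by taints, and the remaining nodes already host a busybox replica, so a required pod anti-affinity term on app=busybox excludes them as well. The sketch below is a hypothetical illustration of the kind of constraint that yields the "didn't match pod anti-affinity rules" message; it is not the manifest used by this test, and only the field names come from the public k8s.io/api types.

	// antiaffinity_sketch.go - hypothetical example of a required pod anti-affinity
	// term. With such a term, any node that already runs a pod labeled app=busybox
	// in the same topology domain is rejected by the scheduler, which is consistent
	// with the events above once each schedulable node holds one busybox replica.
	package main

	import (
		"fmt"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	)

	func main() {
		affinity := &corev1.Affinity{
			PodAntiAffinity: &corev1.PodAntiAffinity{
				// "Required" terms are hard constraints at scheduling time.
				RequiredDuringSchedulingIgnoredDuringExecution: []corev1.PodAffinityTerm{{
					LabelSelector: &metav1.LabelSelector{
						MatchLabels: map[string]string{"app": "busybox"},
					},
					// One replica per node: the hostname label defines the domain.
					TopologyKey: "kubernetes.io/hostname",
				}},
			},
		}
		fmt.Printf("%+v\n", affinity)
	}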

                                                
                                    
x
+
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (3.21s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:415: expected profile "ha-807463" in json of 'profile list' to have "Degraded" status but have "Starting" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-807463\",\"Status\":\"Starting\",\"Config\":{\"Name\":\"ha-807463\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92\",\"Memory\":3072,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"docker\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSS
haresRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.34.1\",\"ClusterName\":\"ha-807463\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"192.168.49.254\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"crio\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"192.168.49.2\",\"Port\":8443,\"KubernetesVersion\":\"v1.34.1\",\"ContainerRuntime\":\"crio\",\"ControlPlane\":true,\"Worker\":true},{
\"Name\":\"m02\",\"IP\":\"192.168.49.3\",\"Port\":8443,\"KubernetesVersion\":\"v1.34.1\",\"ContainerRuntime\":\"crio\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m04\",\"IP\":\"192.168.49.5\",\"Port\":0,\"KubernetesVersion\":\"v1.34.1\",\"ContainerRuntime\":\"crio\",\"ControlPlane\":false,\"Worker\":true}],\"Addons\":{\"ambassador\":false,\"amd-gpu-device-plugin\":false,\"auto-pause\":false,\"cloud-spanner\":false,\"csi-hostpath-driver\":false,\"dashboard\":false,\"default-storageclass\":false,\"efk\":false,\"freshpod\":false,\"gcp-auth\":false,\"gvisor\":false,\"headlamp\":false,\"inaccel\":false,\"ingress\":false,\"ingress-dns\":false,\"inspektor-gadget\":false,\"istio\":false,\"istio-provisioner\":false,\"kong\":false,\"kubeflow\":false,\"kubetail\":false,\"kubevirt\":false,\"logviewer\":false,\"metallb\":false,\"metrics-server\":false,\"nvidia-device-plugin\":false,\"nvidia-driver-installer\":false,\"nvidia-gpu-device-plugin\":false,\"olm\":false,\"pod-security-policy\":false,\"portainer\":false,\
"registry\":false,\"registry-aliases\":false,\"registry-creds\":false,\"storage-provisioner\":false,\"storage-provisioner-rancher\":false,\"volcano\":false,\"volumesnapshots\":false,\"yakd\":false},\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"MountString\":\"\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"DisableCoreDNSLog\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"\",\"SocketVMnetPath\":\"\",\"Sta
ticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":true}]}"*. args: "out/minikube-linux-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect ha-807463
helpers_test.go:243: (dbg) docker inspect ha-807463:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "fea8f67be9d437ba2c81eace43786d004b06e1b2b690c3ab6f02d448677e04b6",
	        "Created": "2025-10-09T19:22:12.218448558Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 343436,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-09T19:27:31.498701729Z",
	            "FinishedAt": "2025-10-09T19:27:30.881285461Z"
	        },
	        "Image": "sha256:0e619a02aa1f399c748a05785ca381533d7a01df724b62f3a9ddd9e288db8f6f",
	        "ResolvConfPath": "/var/lib/docker/containers/fea8f67be9d437ba2c81eace43786d004b06e1b2b690c3ab6f02d448677e04b6/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/fea8f67be9d437ba2c81eace43786d004b06e1b2b690c3ab6f02d448677e04b6/hostname",
	        "HostsPath": "/var/lib/docker/containers/fea8f67be9d437ba2c81eace43786d004b06e1b2b690c3ab6f02d448677e04b6/hosts",
	        "LogPath": "/var/lib/docker/containers/fea8f67be9d437ba2c81eace43786d004b06e1b2b690c3ab6f02d448677e04b6/fea8f67be9d437ba2c81eace43786d004b06e1b2b690c3ab6f02d448677e04b6-json.log",
	        "Name": "/ha-807463",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ha-807463:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "ha-807463",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "fea8f67be9d437ba2c81eace43786d004b06e1b2b690c3ab6f02d448677e04b6",
	                "LowerDir": "/var/lib/docker/overlay2/501f3dc17989cbf113e3e1d86a2dc5dbf4a1ebf96c1051617a1e82e0c118ddb2-init/diff:/var/lib/docker/overlay2/810a91395ed9b7ed2c0bbbdee8600efcf64f88722cbabc47d471235a9f901ed9/diff",
	                "MergedDir": "/var/lib/docker/overlay2/501f3dc17989cbf113e3e1d86a2dc5dbf4a1ebf96c1051617a1e82e0c118ddb2/merged",
	                "UpperDir": "/var/lib/docker/overlay2/501f3dc17989cbf113e3e1d86a2dc5dbf4a1ebf96c1051617a1e82e0c118ddb2/diff",
	                "WorkDir": "/var/lib/docker/overlay2/501f3dc17989cbf113e3e1d86a2dc5dbf4a1ebf96c1051617a1e82e0c118ddb2/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "ha-807463",
	                "Source": "/var/lib/docker/volumes/ha-807463/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "ha-807463",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ha-807463",
	                "name.minikube.sigs.k8s.io": "ha-807463",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "519a0261c9568d4a6f9cab4a02626789b917d4097449bf7d122da62e1553ad90",
	            "SandboxKey": "/var/run/docker/netns/519a0261c956",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33181"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33182"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33185"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33183"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33184"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ha-807463": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "46:d7:45:51:f4:8a",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "3847a657768484ae039efdd09e2b590403676178eb4c67c06a2221fe144c70b7",
	                    "EndpointID": "1be139014228dabc7add444f5a4d8325f46a753a08b0696634c3bb797577acd0",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "ha-807463",
	                        "fea8f67be9d4"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p ha-807463 -n ha-807463
helpers_test.go:252: <<< TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p ha-807463 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p ha-807463 logs -n 25: (1.427720529s)
helpers_test.go:260: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                 ARGS                                                                 │  PROFILE  │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ ha-807463 ssh -n ha-807463-m03 sudo cat /home/docker/cp-test.txt                                                                     │ ha-807463 │ jenkins │ v1.37.0 │ 09 Oct 25 19:26 UTC │ 09 Oct 25 19:26 UTC │
	│ ssh     │ ha-807463 ssh -n ha-807463-m02 sudo cat /home/docker/cp-test_ha-807463-m03_ha-807463-m02.txt                                         │ ha-807463 │ jenkins │ v1.37.0 │ 09 Oct 25 19:26 UTC │ 09 Oct 25 19:26 UTC │
	│ cp      │ ha-807463 cp ha-807463-m03:/home/docker/cp-test.txt ha-807463-m04:/home/docker/cp-test_ha-807463-m03_ha-807463-m04.txt               │ ha-807463 │ jenkins │ v1.37.0 │ 09 Oct 25 19:26 UTC │ 09 Oct 25 19:26 UTC │
	│ ssh     │ ha-807463 ssh -n ha-807463-m03 sudo cat /home/docker/cp-test.txt                                                                     │ ha-807463 │ jenkins │ v1.37.0 │ 09 Oct 25 19:26 UTC │ 09 Oct 25 19:26 UTC │
	│ ssh     │ ha-807463 ssh -n ha-807463-m04 sudo cat /home/docker/cp-test_ha-807463-m03_ha-807463-m04.txt                                         │ ha-807463 │ jenkins │ v1.37.0 │ 09 Oct 25 19:26 UTC │ 09 Oct 25 19:26 UTC │
	│ cp      │ ha-807463 cp testdata/cp-test.txt ha-807463-m04:/home/docker/cp-test.txt                                                             │ ha-807463 │ jenkins │ v1.37.0 │ 09 Oct 25 19:26 UTC │ 09 Oct 25 19:26 UTC │
	│ ssh     │ ha-807463 ssh -n ha-807463-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-807463 │ jenkins │ v1.37.0 │ 09 Oct 25 19:26 UTC │ 09 Oct 25 19:26 UTC │
	│ cp      │ ha-807463 cp ha-807463-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1218422779/001/cp-test_ha-807463-m04.txt │ ha-807463 │ jenkins │ v1.37.0 │ 09 Oct 25 19:26 UTC │ 09 Oct 25 19:26 UTC │
	│ ssh     │ ha-807463 ssh -n ha-807463-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-807463 │ jenkins │ v1.37.0 │ 09 Oct 25 19:26 UTC │ 09 Oct 25 19:26 UTC │
	│ cp      │ ha-807463 cp ha-807463-m04:/home/docker/cp-test.txt ha-807463:/home/docker/cp-test_ha-807463-m04_ha-807463.txt                       │ ha-807463 │ jenkins │ v1.37.0 │ 09 Oct 25 19:26 UTC │ 09 Oct 25 19:26 UTC │
	│ ssh     │ ha-807463 ssh -n ha-807463-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-807463 │ jenkins │ v1.37.0 │ 09 Oct 25 19:26 UTC │ 09 Oct 25 19:26 UTC │
	│ ssh     │ ha-807463 ssh -n ha-807463 sudo cat /home/docker/cp-test_ha-807463-m04_ha-807463.txt                                                 │ ha-807463 │ jenkins │ v1.37.0 │ 09 Oct 25 19:26 UTC │ 09 Oct 25 19:26 UTC │
	│ cp      │ ha-807463 cp ha-807463-m04:/home/docker/cp-test.txt ha-807463-m02:/home/docker/cp-test_ha-807463-m04_ha-807463-m02.txt               │ ha-807463 │ jenkins │ v1.37.0 │ 09 Oct 25 19:26 UTC │ 09 Oct 25 19:26 UTC │
	│ ssh     │ ha-807463 ssh -n ha-807463-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-807463 │ jenkins │ v1.37.0 │ 09 Oct 25 19:26 UTC │ 09 Oct 25 19:26 UTC │
	│ ssh     │ ha-807463 ssh -n ha-807463-m02 sudo cat /home/docker/cp-test_ha-807463-m04_ha-807463-m02.txt                                         │ ha-807463 │ jenkins │ v1.37.0 │ 09 Oct 25 19:26 UTC │ 09 Oct 25 19:26 UTC │
	│ cp      │ ha-807463 cp ha-807463-m04:/home/docker/cp-test.txt ha-807463-m03:/home/docker/cp-test_ha-807463-m04_ha-807463-m03.txt               │ ha-807463 │ jenkins │ v1.37.0 │ 09 Oct 25 19:26 UTC │ 09 Oct 25 19:26 UTC │
	│ ssh     │ ha-807463 ssh -n ha-807463-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-807463 │ jenkins │ v1.37.0 │ 09 Oct 25 19:26 UTC │ 09 Oct 25 19:26 UTC │
	│ ssh     │ ha-807463 ssh -n ha-807463-m03 sudo cat /home/docker/cp-test_ha-807463-m04_ha-807463-m03.txt                                         │ ha-807463 │ jenkins │ v1.37.0 │ 09 Oct 25 19:26 UTC │ 09 Oct 25 19:26 UTC │
	│ node    │ ha-807463 node stop m02 --alsologtostderr -v 5                                                                                       │ ha-807463 │ jenkins │ v1.37.0 │ 09 Oct 25 19:26 UTC │ 09 Oct 25 19:26 UTC │
	│ node    │ ha-807463 node start m02 --alsologtostderr -v 5                                                                                      │ ha-807463 │ jenkins │ v1.37.0 │ 09 Oct 25 19:26 UTC │ 09 Oct 25 19:27 UTC │
	│ node    │ ha-807463 node list --alsologtostderr -v 5                                                                                           │ ha-807463 │ jenkins │ v1.37.0 │ 09 Oct 25 19:27 UTC │                     │
	│ stop    │ ha-807463 stop --alsologtostderr -v 5                                                                                                │ ha-807463 │ jenkins │ v1.37.0 │ 09 Oct 25 19:27 UTC │ 09 Oct 25 19:27 UTC │
	│ start   │ ha-807463 start --wait true --alsologtostderr -v 5                                                                                   │ ha-807463 │ jenkins │ v1.37.0 │ 09 Oct 25 19:27 UTC │                     │
	│ node    │ ha-807463 node list --alsologtostderr -v 5                                                                                           │ ha-807463 │ jenkins │ v1.37.0 │ 09 Oct 25 19:35 UTC │                     │
	│ node    │ ha-807463 node delete m03 --alsologtostderr -v 5                                                                                     │ ha-807463 │ jenkins │ v1.37.0 │ 09 Oct 25 19:35 UTC │ 09 Oct 25 19:36 UTC │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/09 19:27:31
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1009 19:27:31.218830  343307 out.go:360] Setting OutFile to fd 1 ...
	I1009 19:27:31.218980  343307 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1009 19:27:31.218993  343307 out.go:374] Setting ErrFile to fd 2...
	I1009 19:27:31.219013  343307 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1009 19:27:31.219307  343307 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21683-294150/.minikube/bin
	I1009 19:27:31.219769  343307 out.go:368] Setting JSON to false
	I1009 19:27:31.220680  343307 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":7791,"bootTime":1760030261,"procs":153,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1009 19:27:31.220751  343307 start.go:143] virtualization:  
	I1009 19:27:31.225902  343307 out.go:179] * [ha-807463] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1009 19:27:31.229045  343307 out.go:179]   - MINIKUBE_LOCATION=21683
	I1009 19:27:31.229154  343307 notify.go:221] Checking for updates...
	I1009 19:27:31.235436  343307 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1009 19:27:31.238296  343307 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21683-294150/kubeconfig
	I1009 19:27:31.241057  343307 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21683-294150/.minikube
	I1009 19:27:31.243947  343307 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1009 19:27:31.246781  343307 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1009 19:27:31.250030  343307 config.go:182] Loaded profile config "ha-807463": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1009 19:27:31.250184  343307 driver.go:422] Setting default libvirt URI to qemu:///system
	I1009 19:27:31.286472  343307 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1009 19:27:31.286604  343307 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1009 19:27:31.343705  343307 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:4 ContainersRunning:0 ContainersPaused:0 ContainersStopped:4 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:27 OomKillDisable:true NGoroutines:42 SystemTime:2025-10-09 19:27:31.334706362 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214827008 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1009 19:27:31.343816  343307 docker.go:319] overlay module found
	I1009 19:27:31.346870  343307 out.go:179] * Using the docker driver based on existing profile
	I1009 19:27:31.349767  343307 start.go:309] selected driver: docker
	I1009 19:27:31.349786  343307 start.go:930] validating driver "docker" against &{Name:ha-807463 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-807463 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName
:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.1 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:f
alse ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false Dis
ableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1009 19:27:31.349926  343307 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1009 19:27:31.350028  343307 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1009 19:27:31.412249  343307 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:4 ContainersRunning:0 ContainersPaused:0 ContainersStopped:4 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:27 OomKillDisable:true NGoroutines:42 SystemTime:2025-10-09 19:27:31.403030574 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214827008 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1009 19:27:31.412653  343307 start_flags.go:993] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1009 19:27:31.412689  343307 cni.go:84] Creating CNI manager for ""
	I1009 19:27:31.412755  343307 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I1009 19:27:31.412799  343307 start.go:353] cluster config:
	{Name:ha-807463 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-807463 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerR
untime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false isti
o-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMne
tClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1009 19:27:31.417709  343307 out.go:179] * Starting "ha-807463" primary control-plane node in "ha-807463" cluster
	I1009 19:27:31.420530  343307 cache.go:123] Beginning downloading kic base image for docker with crio
	I1009 19:27:31.423466  343307 out.go:179] * Pulling base image v0.0.48-1759745255-21703 ...
	I1009 19:27:31.426321  343307 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1009 19:27:31.426392  343307 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21683-294150/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1009 19:27:31.426406  343307 cache.go:58] Caching tarball of preloaded images
	I1009 19:27:31.426410  343307 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon
	I1009 19:27:31.426490  343307 preload.go:233] Found /home/jenkins/minikube-integration/21683-294150/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1009 19:27:31.426508  343307 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1009 19:27:31.426650  343307 profile.go:143] Saving config to /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/ha-807463/config.json ...
	I1009 19:27:31.445925  343307 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon, skipping pull
	I1009 19:27:31.445951  343307 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 exists in daemon, skipping load
	I1009 19:27:31.445969  343307 cache.go:232] Successfully downloaded all kic artifacts
	I1009 19:27:31.446007  343307 start.go:361] acquireMachinesLock for ha-807463: {Name:mk7b03a6b271157d59e205354be444442bc66672 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1009 19:27:31.446069  343307 start.go:365] duration metric: took 41.674µs to acquireMachinesLock for "ha-807463"
	I1009 19:27:31.446095  343307 start.go:97] Skipping create...Using existing machine configuration
	I1009 19:27:31.446101  343307 fix.go:55] fixHost starting: 
	I1009 19:27:31.446358  343307 cli_runner.go:164] Run: docker container inspect ha-807463 --format={{.State.Status}}
	I1009 19:27:31.463339  343307 fix.go:113] recreateIfNeeded on ha-807463: state=Stopped err=<nil>
	W1009 19:27:31.463369  343307 fix.go:139] unexpected machine state, will restart: <nil>
	I1009 19:27:31.466724  343307 out.go:252] * Restarting existing docker container for "ha-807463" ...
	I1009 19:27:31.466808  343307 cli_runner.go:164] Run: docker start ha-807463
	I1009 19:27:31.729554  343307 cli_runner.go:164] Run: docker container inspect ha-807463 --format={{.State.Status}}
	I1009 19:27:31.752533  343307 kic.go:430] container "ha-807463" state is running.
	I1009 19:27:31.752940  343307 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-807463
	I1009 19:27:31.776613  343307 profile.go:143] Saving config to /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/ha-807463/config.json ...
	I1009 19:27:31.776858  343307 machine.go:93] provisionDockerMachine start ...
	I1009 19:27:31.776933  343307 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-807463
	I1009 19:27:31.798253  343307 main.go:141] libmachine: Using SSH client type: native
	I1009 19:27:31.798586  343307 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33181 <nil> <nil>}
	I1009 19:27:31.798603  343307 main.go:141] libmachine: About to run SSH command:
	hostname
	I1009 19:27:31.799247  343307 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1009 19:27:34.945362  343307 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-807463
	
	I1009 19:27:34.945397  343307 ubuntu.go:182] provisioning hostname "ha-807463"
	I1009 19:27:34.945467  343307 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-807463
	I1009 19:27:34.962891  343307 main.go:141] libmachine: Using SSH client type: native
	I1009 19:27:34.963208  343307 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33181 <nil> <nil>}
	I1009 19:27:34.963226  343307 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-807463 && echo "ha-807463" | sudo tee /etc/hostname
	I1009 19:27:35.120375  343307 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-807463
	
	I1009 19:27:35.120459  343307 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-807463
	I1009 19:27:35.138932  343307 main.go:141] libmachine: Using SSH client type: native
	I1009 19:27:35.139244  343307 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33181 <nil> <nil>}
	I1009 19:27:35.139259  343307 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-807463' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-807463/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-807463' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1009 19:27:35.285402  343307 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1009 19:27:35.285451  343307 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21683-294150/.minikube CaCertPath:/home/jenkins/minikube-integration/21683-294150/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21683-294150/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21683-294150/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21683-294150/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21683-294150/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21683-294150/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21683-294150/.minikube}
	I1009 19:27:35.285478  343307 ubuntu.go:190] setting up certificates
	I1009 19:27:35.285488  343307 provision.go:84] configureAuth start
	I1009 19:27:35.285558  343307 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-807463
	I1009 19:27:35.302829  343307 provision.go:143] copyHostCerts
	I1009 19:27:35.302873  343307 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-294150/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21683-294150/.minikube/ca.pem
	I1009 19:27:35.302904  343307 exec_runner.go:144] found /home/jenkins/minikube-integration/21683-294150/.minikube/ca.pem, removing ...
	I1009 19:27:35.302917  343307 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21683-294150/.minikube/ca.pem
	I1009 19:27:35.303005  343307 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-294150/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21683-294150/.minikube/ca.pem (1078 bytes)
	I1009 19:27:35.303096  343307 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-294150/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21683-294150/.minikube/cert.pem
	I1009 19:27:35.303118  343307 exec_runner.go:144] found /home/jenkins/minikube-integration/21683-294150/.minikube/cert.pem, removing ...
	I1009 19:27:35.303127  343307 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21683-294150/.minikube/cert.pem
	I1009 19:27:35.303156  343307 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-294150/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21683-294150/.minikube/cert.pem (1123 bytes)
	I1009 19:27:35.303204  343307 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-294150/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21683-294150/.minikube/key.pem
	I1009 19:27:35.303225  343307 exec_runner.go:144] found /home/jenkins/minikube-integration/21683-294150/.minikube/key.pem, removing ...
	I1009 19:27:35.303230  343307 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21683-294150/.minikube/key.pem
	I1009 19:27:35.303255  343307 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-294150/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21683-294150/.minikube/key.pem (1679 bytes)
	I1009 19:27:35.303308  343307 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21683-294150/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21683-294150/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21683-294150/.minikube/certs/ca-key.pem org=jenkins.ha-807463 san=[127.0.0.1 192.168.49.2 ha-807463 localhost minikube]
	I1009 19:27:35.901224  343307 provision.go:177] copyRemoteCerts
	I1009 19:27:35.901289  343307 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1009 19:27:35.901355  343307 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-807463
	I1009 19:27:35.918214  343307 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33181 SSHKeyPath:/home/jenkins/minikube-integration/21683-294150/.minikube/machines/ha-807463/id_rsa Username:docker}
	I1009 19:27:36.021624  343307 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-294150/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1009 19:27:36.021693  343307 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-294150/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1009 19:27:36.040520  343307 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-294150/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1009 19:27:36.040583  343307 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-294150/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I1009 19:27:36.059254  343307 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-294150/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1009 19:27:36.059315  343307 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-294150/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1009 19:27:36.078084  343307 provision.go:87] duration metric: took 792.56918ms to configureAuth
	I1009 19:27:36.078112  343307 ubuntu.go:206] setting minikube options for container-runtime
	I1009 19:27:36.078344  343307 config.go:182] Loaded profile config "ha-807463": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1009 19:27:36.078465  343307 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-807463
	I1009 19:27:36.095675  343307 main.go:141] libmachine: Using SSH client type: native
	I1009 19:27:36.095992  343307 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33181 <nil> <nil>}
	I1009 19:27:36.096012  343307 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1009 19:27:36.425006  343307 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1009 19:27:36.425081  343307 machine.go:96] duration metric: took 4.648205511s to provisionDockerMachine
	I1009 19:27:36.425141  343307 start.go:294] postStartSetup for "ha-807463" (driver="docker")
	I1009 19:27:36.425177  343307 start.go:323] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1009 19:27:36.425298  343307 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1009 19:27:36.425384  343307 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-807463
	I1009 19:27:36.449453  343307 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33181 SSHKeyPath:/home/jenkins/minikube-integration/21683-294150/.minikube/machines/ha-807463/id_rsa Username:docker}
	I1009 19:27:36.553510  343307 ssh_runner.go:195] Run: cat /etc/os-release
	I1009 19:27:36.557246  343307 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1009 19:27:36.557278  343307 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1009 19:27:36.557290  343307 filesync.go:126] Scanning /home/jenkins/minikube-integration/21683-294150/.minikube/addons for local assets ...
	I1009 19:27:36.557367  343307 filesync.go:126] Scanning /home/jenkins/minikube-integration/21683-294150/.minikube/files for local assets ...
	I1009 19:27:36.557489  343307 filesync.go:149] local asset: /home/jenkins/minikube-integration/21683-294150/.minikube/files/etc/ssl/certs/2960022.pem -> 2960022.pem in /etc/ssl/certs
	I1009 19:27:36.557501  343307 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-294150/.minikube/files/etc/ssl/certs/2960022.pem -> /etc/ssl/certs/2960022.pem
	I1009 19:27:36.557607  343307 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1009 19:27:36.565210  343307 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-294150/.minikube/files/etc/ssl/certs/2960022.pem --> /etc/ssl/certs/2960022.pem (1708 bytes)
	I1009 19:27:36.583083  343307 start.go:297] duration metric: took 157.903278ms for postStartSetup
	I1009 19:27:36.583210  343307 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1009 19:27:36.583282  343307 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-807463
	I1009 19:27:36.600612  343307 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33181 SSHKeyPath:/home/jenkins/minikube-integration/21683-294150/.minikube/machines/ha-807463/id_rsa Username:docker}
	I1009 19:27:36.698274  343307 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1009 19:27:36.703016  343307 fix.go:57] duration metric: took 5.256907577s for fixHost
	I1009 19:27:36.703042  343307 start.go:84] releasing machines lock for "ha-807463", held for 5.256957103s
	I1009 19:27:36.703115  343307 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-807463
	I1009 19:27:36.720370  343307 ssh_runner.go:195] Run: cat /version.json
	I1009 19:27:36.720385  343307 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1009 19:27:36.720422  343307 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-807463
	I1009 19:27:36.720451  343307 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-807463
	I1009 19:27:36.743233  343307 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33181 SSHKeyPath:/home/jenkins/minikube-integration/21683-294150/.minikube/machines/ha-807463/id_rsa Username:docker}
	I1009 19:27:36.753326  343307 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33181 SSHKeyPath:/home/jenkins/minikube-integration/21683-294150/.minikube/machines/ha-807463/id_rsa Username:docker}
	I1009 19:27:36.948710  343307 ssh_runner.go:195] Run: systemctl --version
	I1009 19:27:36.955436  343307 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1009 19:27:36.994992  343307 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1009 19:27:37.001157  343307 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1009 19:27:37.001242  343307 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1009 19:27:37.015899  343307 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1009 19:27:37.015931  343307 start.go:496] detecting cgroup driver to use...
	I1009 19:27:37.016002  343307 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1009 19:27:37.016099  343307 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1009 19:27:37.034350  343307 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1009 19:27:37.049609  343307 docker.go:218] disabling cri-docker service (if available) ...
	I1009 19:27:37.049706  343307 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1009 19:27:37.065757  343307 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1009 19:27:37.079370  343307 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1009 19:27:37.204726  343307 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1009 19:27:37.324926  343307 docker.go:234] disabling docker service ...
	I1009 19:27:37.325051  343307 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1009 19:27:37.340669  343307 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1009 19:27:37.354186  343307 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1009 19:27:37.468499  343307 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1009 19:27:37.609321  343307 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1009 19:27:37.623308  343307 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1009 19:27:37.638872  343307 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1009 19:27:37.638957  343307 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:27:37.648255  343307 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1009 19:27:37.648376  343307 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:27:37.658302  343307 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:27:37.667181  343307 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:27:37.675984  343307 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1009 19:27:37.685440  343307 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:27:37.694680  343307 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:27:37.702750  343307 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:27:37.711421  343307 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1009 19:27:37.719182  343307 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1009 19:27:37.727483  343307 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 19:27:37.841375  343307 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1009 19:27:37.980708  343307 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1009 19:27:37.980812  343307 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1009 19:27:37.984807  343307 start.go:564] Will wait 60s for crictl version
	I1009 19:27:37.984933  343307 ssh_runner.go:195] Run: which crictl
	I1009 19:27:37.988572  343307 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1009 19:27:38.021983  343307 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1009 19:27:38.022073  343307 ssh_runner.go:195] Run: crio --version
	I1009 19:27:38.052703  343307 ssh_runner.go:195] Run: crio --version
	I1009 19:27:38.085238  343307 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1009 19:27:38.088088  343307 cli_runner.go:164] Run: docker network inspect ha-807463 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1009 19:27:38.104470  343307 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1009 19:27:38.108353  343307 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1009 19:27:38.118588  343307 kubeadm.go:883] updating cluster {Name:ha-807463 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-807463 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APISe
rverNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:
false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:f
alse DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1009 19:27:38.118741  343307 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1009 19:27:38.118810  343307 ssh_runner.go:195] Run: sudo crictl images --output json
	I1009 19:27:38.155316  343307 crio.go:514] all images are preloaded for cri-o runtime.
	I1009 19:27:38.155341  343307 crio.go:433] Images already preloaded, skipping extraction
	I1009 19:27:38.155400  343307 ssh_runner.go:195] Run: sudo crictl images --output json
	I1009 19:27:38.184223  343307 crio.go:514] all images are preloaded for cri-o runtime.
	I1009 19:27:38.184246  343307 cache_images.go:85] Images are preloaded, skipping loading
	I1009 19:27:38.184257  343307 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.34.1 crio true true} ...
	I1009 19:27:38.184370  343307 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-807463 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:ha-807463 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1009 19:27:38.184448  343307 ssh_runner.go:195] Run: crio config
	I1009 19:27:38.252414  343307 cni.go:84] Creating CNI manager for ""
	I1009 19:27:38.252436  343307 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I1009 19:27:38.252454  343307 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1009 19:27:38.252488  343307 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-807463 NodeName:ha-807463 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/mani
fests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1009 19:27:38.252634  343307 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-807463"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1009 19:27:38.252656  343307 kube-vip.go:115] generating kube-vip config ...
	I1009 19:27:38.252721  343307 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I1009 19:27:38.265014  343307 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I1009 19:27:38.265147  343307 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I1009 19:27:38.265209  343307 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1009 19:27:38.272978  343307 binaries.go:44] Found k8s binaries, skipping transfer
	I1009 19:27:38.273096  343307 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I1009 19:27:38.280861  343307 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (359 bytes)
	I1009 19:27:38.294726  343307 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1009 19:27:38.307657  343307 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2206 bytes)
	I1009 19:27:38.320684  343307 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I1009 19:27:38.333393  343307 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1009 19:27:38.337014  343307 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1009 19:27:38.346725  343307 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 19:27:38.455808  343307 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1009 19:27:38.472442  343307 certs.go:69] Setting up /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/ha-807463 for IP: 192.168.49.2
	I1009 19:27:38.472472  343307 certs.go:195] generating shared ca certs ...
	I1009 19:27:38.472489  343307 certs.go:227] acquiring lock for ca certs: {Name:mk21fccf7de12baa51e226eec44546b42b579a70 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 19:27:38.472635  343307 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21683-294150/.minikube/ca.key
	I1009 19:27:38.472702  343307 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21683-294150/.minikube/proxy-client-ca.key
	I1009 19:27:38.472715  343307 certs.go:257] generating profile certs ...
	I1009 19:27:38.472790  343307 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/ha-807463/client.key
	I1009 19:27:38.472829  343307 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/ha-807463/apiserver.key.2f140c92
	I1009 19:27:38.472846  343307 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/ha-807463/apiserver.crt.2f140c92 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2 192.168.49.3 192.168.49.4 192.168.49.254]
	I1009 19:27:38.846814  343307 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/ha-807463/apiserver.crt.2f140c92 ...
	I1009 19:27:38.846850  343307 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/ha-807463/apiserver.crt.2f140c92: {Name:mkc2191acbc8bdf29d69f0113598f387f3156525 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 19:27:38.847045  343307 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/ha-807463/apiserver.key.2f140c92 ...
	I1009 19:27:38.847059  343307 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/ha-807463/apiserver.key.2f140c92: {Name:mk4420d6a062c4dab2900704e5add4b492d36555 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 19:27:38.847148  343307 certs.go:382] copying /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/ha-807463/apiserver.crt.2f140c92 -> /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/ha-807463/apiserver.crt
	I1009 19:27:38.847292  343307 certs.go:386] copying /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/ha-807463/apiserver.key.2f140c92 -> /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/ha-807463/apiserver.key
	I1009 19:27:38.847425  343307 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/ha-807463/proxy-client.key
	I1009 19:27:38.847442  343307 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-294150/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1009 19:27:38.847458  343307 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-294150/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1009 19:27:38.847476  343307 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-294150/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1009 19:27:38.847488  343307 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-294150/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1009 19:27:38.847504  343307 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/ha-807463/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1009 19:27:38.847525  343307 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/ha-807463/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1009 19:27:38.847541  343307 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/ha-807463/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1009 19:27:38.847559  343307 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/ha-807463/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1009 19:27:38.847611  343307 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-294150/.minikube/certs/296002.pem (1338 bytes)
	W1009 19:27:38.847645  343307 certs.go:480] ignoring /home/jenkins/minikube-integration/21683-294150/.minikube/certs/296002_empty.pem, impossibly tiny 0 bytes
	I1009 19:27:38.847656  343307 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-294150/.minikube/certs/ca-key.pem (1679 bytes)
	I1009 19:27:38.847681  343307 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-294150/.minikube/certs/ca.pem (1078 bytes)
	I1009 19:27:38.847709  343307 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-294150/.minikube/certs/cert.pem (1123 bytes)
	I1009 19:27:38.847733  343307 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-294150/.minikube/certs/key.pem (1679 bytes)
	I1009 19:27:38.847781  343307 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-294150/.minikube/files/etc/ssl/certs/2960022.pem (1708 bytes)
	I1009 19:27:38.847811  343307 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-294150/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1009 19:27:38.847826  343307 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-294150/.minikube/certs/296002.pem -> /usr/share/ca-certificates/296002.pem
	I1009 19:27:38.847838  343307 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-294150/.minikube/files/etc/ssl/certs/2960022.pem -> /usr/share/ca-certificates/2960022.pem
	I1009 19:27:38.848384  343307 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-294150/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1009 19:27:38.867598  343307 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-294150/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1009 19:27:38.888313  343307 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-294150/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1009 19:27:38.908288  343307 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-294150/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1009 19:27:38.929572  343307 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/ha-807463/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1009 19:27:38.949045  343307 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/ha-807463/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1009 19:27:38.966969  343307 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/ha-807463/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1009 19:27:38.986319  343307 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/ha-807463/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1009 19:27:39.012715  343307 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-294150/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1009 19:27:39.032678  343307 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-294150/.minikube/certs/296002.pem --> /usr/share/ca-certificates/296002.pem (1338 bytes)
	I1009 19:27:39.051431  343307 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-294150/.minikube/files/etc/ssl/certs/2960022.pem --> /usr/share/ca-certificates/2960022.pem (1708 bytes)
	I1009 19:27:39.069614  343307 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1009 19:27:39.090445  343307 ssh_runner.go:195] Run: openssl version
	I1009 19:27:39.098940  343307 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1009 19:27:39.108430  343307 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1009 19:27:39.119839  343307 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  9 19:01 /usr/share/ca-certificates/minikubeCA.pem
	I1009 19:27:39.119907  343307 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1009 19:27:39.188461  343307 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1009 19:27:39.197309  343307 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/296002.pem && ln -fs /usr/share/ca-certificates/296002.pem /etc/ssl/certs/296002.pem"
	I1009 19:27:39.212076  343307 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/296002.pem
	I1009 19:27:39.218737  343307 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  9 19:08 /usr/share/ca-certificates/296002.pem
	I1009 19:27:39.218850  343307 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/296002.pem
	I1009 19:27:39.320003  343307 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/296002.pem /etc/ssl/certs/51391683.0"
	I1009 19:27:39.338511  343307 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2960022.pem && ln -fs /usr/share/ca-certificates/2960022.pem /etc/ssl/certs/2960022.pem"
	I1009 19:27:39.353078  343307 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2960022.pem
	I1009 19:27:39.358619  343307 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  9 19:08 /usr/share/ca-certificates/2960022.pem
	I1009 19:27:39.358736  343307 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2960022.pem
	I1009 19:27:39.417831  343307 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2960022.pem /etc/ssl/certs/3ec20f2e.0"
	I1009 19:27:39.430407  343307 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1009 19:27:39.437508  343307 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1009 19:27:39.502060  343307 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1009 19:27:39.549190  343307 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1009 19:27:39.599910  343307 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1009 19:27:39.657699  343307 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1009 19:27:39.729015  343307 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1009 19:27:39.791014  343307 kubeadm.go:400] StartCluster: {Name:ha-807463 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-807463 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServe
rNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:fal
se ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:fals
e DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1009 19:27:39.791208  343307 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1009 19:27:39.791318  343307 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1009 19:27:39.827907  343307 cri.go:89] found id: "9d475a483e7023b214d8a1506f2ba793d2cb34e4e0e7b5f0fc49d91b875116f7"
	I1009 19:27:39.827980  343307 cri.go:89] found id: "eb3eb3edb2fff30f90b98210a15c7960a0d8f4700c380a4bc2a236e3530d4043"
	I1009 19:27:39.828002  343307 cri.go:89] found id: "e4593fb70e6dd0047bc83f89897d4c1ad23896e5ca9a3628c4bbeea360f8cbaf"
	I1009 19:27:39.828027  343307 cri.go:89] found id: "60abd5bf9ea13b7e15b4cb133643cb620ae0f536d45d6ac30703be2e3ef7a45f"
	I1009 19:27:39.828064  343307 cri.go:89] found id: "4477522bd8536fe09afcc2397cd8beb927ccd19a6714098fb7bb1f3ef47595ea"
	I1009 19:27:39.828090  343307 cri.go:89] found id: ""
	I1009 19:27:39.828175  343307 ssh_runner.go:195] Run: sudo runc list -f json
	W1009 19:27:39.846495  343307 kubeadm.go:407] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-09T19:27:39Z" level=error msg="open /run/runc: no such file or directory"
	I1009 19:27:39.846575  343307 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1009 19:27:39.873447  343307 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1009 19:27:39.873525  343307 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1009 19:27:39.873618  343307 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1009 19:27:39.890893  343307 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1009 19:27:39.891370  343307 kubeconfig.go:47] verify endpoint returned: get endpoint: "ha-807463" does not appear in /home/jenkins/minikube-integration/21683-294150/kubeconfig
	I1009 19:27:39.891541  343307 kubeconfig.go:62] /home/jenkins/minikube-integration/21683-294150/kubeconfig needs updating (will repair): [kubeconfig missing "ha-807463" cluster setting kubeconfig missing "ha-807463" context setting]
	I1009 19:27:39.891898  343307 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-294150/kubeconfig: {Name:mke4661344aa77e10bf6690825e8d3a29b29b411 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 19:27:39.892555  343307 kapi.go:59] client config for ha-807463: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21683-294150/.minikube/profiles/ha-807463/client.crt", KeyFile:"/home/jenkins/minikube-integration/21683-294150/.minikube/profiles/ha-807463/client.key", CAFile:"/home/jenkins/minikube-integration/21683-294150/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil
)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2120180), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1009 19:27:39.893429  343307 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1009 19:27:39.893485  343307 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1009 19:27:39.893506  343307 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1009 19:27:39.893530  343307 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1009 19:27:39.893571  343307 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1009 19:27:39.894036  343307 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1009 19:27:39.894259  343307 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1009 19:27:39.909848  343307 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.49.2
	I1009 19:27:39.909926  343307 kubeadm.go:601] duration metric: took 36.380579ms to restartPrimaryControlPlane
	I1009 19:27:39.909962  343307 kubeadm.go:402] duration metric: took 118.974675ms to StartCluster
	I1009 19:27:39.909997  343307 settings.go:142] acquiring lock: {Name:mk20228ebaa2294ae35726600a0d8058088b24a4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 19:27:39.910102  343307 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21683-294150/kubeconfig
	I1009 19:27:39.910819  343307 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-294150/kubeconfig: {Name:mke4661344aa77e10bf6690825e8d3a29b29b411 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 19:27:39.911409  343307 config.go:182] Loaded profile config "ha-807463": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1009 19:27:39.911493  343307 start.go:234] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1009 19:27:39.911613  343307 start.go:242] waiting for startup goroutines ...
	I1009 19:27:39.911544  343307 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1009 19:27:39.917562  343307 out.go:179] * Enabled addons: 
	I1009 19:27:39.920371  343307 addons.go:514] duration metric: took 8.815745ms for enable addons: enabled=[]
	I1009 19:27:39.920465  343307 start.go:247] waiting for cluster config update ...
	I1009 19:27:39.920489  343307 start.go:256] writing updated cluster config ...
	I1009 19:27:39.924923  343307 out.go:203] 
	I1009 19:27:39.928045  343307 config.go:182] Loaded profile config "ha-807463": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1009 19:27:39.928167  343307 profile.go:143] Saving config to /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/ha-807463/config.json ...
	I1009 19:27:39.931505  343307 out.go:179] * Starting "ha-807463-m02" control-plane node in "ha-807463" cluster
	I1009 19:27:39.934402  343307 cache.go:123] Beginning downloading kic base image for docker with crio
	I1009 19:27:39.937316  343307 out.go:179] * Pulling base image v0.0.48-1759745255-21703 ...
	I1009 19:27:39.940080  343307 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1009 19:27:39.940107  343307 cache.go:58] Caching tarball of preloaded images
	I1009 19:27:39.940210  343307 preload.go:233] Found /home/jenkins/minikube-integration/21683-294150/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1009 19:27:39.940220  343307 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1009 19:27:39.940348  343307 profile.go:143] Saving config to /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/ha-807463/config.json ...
	I1009 19:27:39.940566  343307 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon
	I1009 19:27:39.975622  343307 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon, skipping pull
	I1009 19:27:39.975643  343307 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 exists in daemon, skipping load
	I1009 19:27:39.975657  343307 cache.go:232] Successfully downloaded all kic artifacts
	I1009 19:27:39.975682  343307 start.go:361] acquireMachinesLock for ha-807463-m02: {Name:mk6ba8ff733306501b688f1b4a216ac9e405e90f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1009 19:27:39.975736  343307 start.go:365] duration metric: took 39.187µs to acquireMachinesLock for "ha-807463-m02"
	I1009 19:27:39.975756  343307 start.go:97] Skipping create...Using existing machine configuration
	I1009 19:27:39.975761  343307 fix.go:55] fixHost starting: m02
	I1009 19:27:39.976050  343307 cli_runner.go:164] Run: docker container inspect ha-807463-m02 --format={{.State.Status}}
	I1009 19:27:40.012164  343307 fix.go:113] recreateIfNeeded on ha-807463-m02: state=Stopped err=<nil>
	W1009 19:27:40.012195  343307 fix.go:139] unexpected machine state, will restart: <nil>
	I1009 19:27:40.015441  343307 out.go:252] * Restarting existing docker container for "ha-807463-m02" ...
	I1009 19:27:40.015539  343307 cli_runner.go:164] Run: docker start ha-807463-m02
	I1009 19:27:40.410002  343307 cli_runner.go:164] Run: docker container inspect ha-807463-m02 --format={{.State.Status}}
	I1009 19:27:40.445455  343307 kic.go:430] container "ha-807463-m02" state is running.
	I1009 19:27:40.445851  343307 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-807463-m02
	I1009 19:27:40.474228  343307 profile.go:143] Saving config to /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/ha-807463/config.json ...
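	(The restart sequence above can be reproduced by hand against the same node container; a minimal sketch, assuming the profile keeps the ha-807463-m02 container name and that the local docker CLI is the one driving the test:)
	    # restart the stopped node container and confirm it came back
	    docker start ha-807463-m02
	    docker container inspect -f '{{.State.Status}}' ha-807463-m02
	    # read its IPv4/IPv6 addresses with the same template the log uses
	    docker container inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}' ha-807463-m02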
	I1009 19:27:40.474476  343307 machine.go:93] provisionDockerMachine start ...
	I1009 19:27:40.474538  343307 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-807463-m02
	I1009 19:27:40.505891  343307 main.go:141] libmachine: Using SSH client type: native
	I1009 19:27:40.506192  343307 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33186 <nil> <nil>}
	I1009 19:27:40.506201  343307 main.go:141] libmachine: About to run SSH command:
	hostname
	I1009 19:27:40.506929  343307 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:41996->127.0.0.1:33186: read: connection reset by peer
	I1009 19:27:43.729947  343307 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-807463-m02
	
	I1009 19:27:43.729974  343307 ubuntu.go:182] provisioning hostname "ha-807463-m02"
	I1009 19:27:43.730046  343307 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-807463-m02
	I1009 19:27:43.750597  343307 main.go:141] libmachine: Using SSH client type: native
	I1009 19:27:43.750914  343307 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33186 <nil> <nil>}
	I1009 19:27:43.750934  343307 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-807463-m02 && echo "ha-807463-m02" | sudo tee /etc/hostname
	I1009 19:27:44.042915  343307 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-807463-m02
	
	I1009 19:27:44.043000  343307 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-807463-m02
	I1009 19:27:44.070967  343307 main.go:141] libmachine: Using SSH client type: native
	I1009 19:27:44.071275  343307 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33186 <nil> <nil>}
	I1009 19:27:44.071306  343307 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-807463-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-807463-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-807463-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1009 19:27:44.341979  343307 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1009 19:27:44.342008  343307 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21683-294150/.minikube CaCertPath:/home/jenkins/minikube-integration/21683-294150/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21683-294150/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21683-294150/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21683-294150/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21683-294150/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21683-294150/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21683-294150/.minikube}
	I1009 19:27:44.342024  343307 ubuntu.go:190] setting up certificates
	I1009 19:27:44.342039  343307 provision.go:84] configureAuth start
	I1009 19:27:44.342104  343307 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-807463-m02
	I1009 19:27:44.370782  343307 provision.go:143] copyHostCerts
	I1009 19:27:44.370832  343307 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-294150/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21683-294150/.minikube/key.pem
	I1009 19:27:44.370866  343307 exec_runner.go:144] found /home/jenkins/minikube-integration/21683-294150/.minikube/key.pem, removing ...
	I1009 19:27:44.370878  343307 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21683-294150/.minikube/key.pem
	I1009 19:27:44.370961  343307 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-294150/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21683-294150/.minikube/key.pem (1679 bytes)
	I1009 19:27:44.371063  343307 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-294150/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21683-294150/.minikube/ca.pem
	I1009 19:27:44.371087  343307 exec_runner.go:144] found /home/jenkins/minikube-integration/21683-294150/.minikube/ca.pem, removing ...
	I1009 19:27:44.371095  343307 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21683-294150/.minikube/ca.pem
	I1009 19:27:44.371128  343307 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-294150/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21683-294150/.minikube/ca.pem (1078 bytes)
	I1009 19:27:44.371178  343307 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-294150/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21683-294150/.minikube/cert.pem
	I1009 19:27:44.371200  343307 exec_runner.go:144] found /home/jenkins/minikube-integration/21683-294150/.minikube/cert.pem, removing ...
	I1009 19:27:44.371210  343307 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21683-294150/.minikube/cert.pem
	I1009 19:27:44.371237  343307 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-294150/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21683-294150/.minikube/cert.pem (1123 bytes)
	I1009 19:27:44.371335  343307 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21683-294150/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21683-294150/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21683-294150/.minikube/certs/ca-key.pem org=jenkins.ha-807463-m02 san=[127.0.0.1 192.168.49.3 ha-807463-m02 localhost minikube]
	I1009 19:27:45.671497  343307 provision.go:177] copyRemoteCerts
	I1009 19:27:45.671655  343307 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1009 19:27:45.671727  343307 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-807463-m02
	I1009 19:27:45.689990  343307 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33186 SSHKeyPath:/home/jenkins/minikube-integration/21683-294150/.minikube/machines/ha-807463-m02/id_rsa Username:docker}
	I1009 19:27:45.879571  343307 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-294150/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1009 19:27:45.879633  343307 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-294150/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1009 19:27:45.934252  343307 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-294150/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1009 19:27:45.934317  343307 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-294150/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1009 19:27:46.015412  343307 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-294150/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1009 19:27:46.015492  343307 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-294150/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1009 19:27:46.095867  343307 provision.go:87] duration metric: took 1.753810196s to configureAuth
	I1009 19:27:46.095898  343307 ubuntu.go:206] setting minikube options for container-runtime
	I1009 19:27:46.096158  343307 config.go:182] Loaded profile config "ha-807463": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1009 19:27:46.096279  343307 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-807463-m02
	I1009 19:27:46.134871  343307 main.go:141] libmachine: Using SSH client type: native
	I1009 19:27:46.135193  343307 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33186 <nil> <nil>}
	I1009 19:27:46.135215  343307 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1009 19:27:47.743001  343307 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1009 19:27:47.743025  343307 machine.go:96] duration metric: took 7.268539709s to provisionDockerMachine
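	(Provisioning ends by writing /etc/sysconfig/crio.minikube and restarting cri-o over SSH. A quick spot-check that the drop-in landed and the runtime recovered, assuming docker exec access to the systemd-based node container:)
	    docker exec ha-807463-m02 cat /etc/sysconfig/crio.minikube
	    docker exec ha-807463-m02 systemctl is-active crio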
	I1009 19:27:47.743037  343307 start.go:294] postStartSetup for "ha-807463-m02" (driver="docker")
	I1009 19:27:47.743048  343307 start.go:323] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1009 19:27:47.743114  343307 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1009 19:27:47.743178  343307 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-807463-m02
	I1009 19:27:47.763602  343307 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33186 SSHKeyPath:/home/jenkins/minikube-integration/21683-294150/.minikube/machines/ha-807463-m02/id_rsa Username:docker}
	I1009 19:27:47.878489  343307 ssh_runner.go:195] Run: cat /etc/os-release
	I1009 19:27:47.882311  343307 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1009 19:27:47.882390  343307 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1009 19:27:47.882425  343307 filesync.go:126] Scanning /home/jenkins/minikube-integration/21683-294150/.minikube/addons for local assets ...
	I1009 19:27:47.882513  343307 filesync.go:126] Scanning /home/jenkins/minikube-integration/21683-294150/.minikube/files for local assets ...
	I1009 19:27:47.882649  343307 filesync.go:149] local asset: /home/jenkins/minikube-integration/21683-294150/.minikube/files/etc/ssl/certs/2960022.pem -> 2960022.pem in /etc/ssl/certs
	I1009 19:27:47.882678  343307 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-294150/.minikube/files/etc/ssl/certs/2960022.pem -> /etc/ssl/certs/2960022.pem
	I1009 19:27:47.882829  343307 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1009 19:27:47.895445  343307 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-294150/.minikube/files/etc/ssl/certs/2960022.pem --> /etc/ssl/certs/2960022.pem (1708 bytes)
	I1009 19:27:47.923753  343307 start.go:297] duration metric: took 180.689414ms for postStartSetup
	I1009 19:27:47.923906  343307 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1009 19:27:47.923987  343307 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-807463-m02
	I1009 19:27:47.943574  343307 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33186 SSHKeyPath:/home/jenkins/minikube-integration/21683-294150/.minikube/machines/ha-807463-m02/id_rsa Username:docker}
	I1009 19:27:48.072414  343307 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1009 19:27:48.090538  343307 fix.go:57] duration metric: took 8.114767256s for fixHost
	I1009 19:27:48.090623  343307 start.go:84] releasing machines lock for "ha-807463-m02", held for 8.114877188s
	I1009 19:27:48.090728  343307 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-807463-m02
	I1009 19:27:48.124084  343307 out.go:179] * Found network options:
	I1009 19:27:48.127431  343307 out.go:179]   - NO_PROXY=192.168.49.2
	W1009 19:27:48.131026  343307 proxy.go:120] fail to check proxy env: Error ip not in block
	W1009 19:27:48.131071  343307 proxy.go:120] fail to check proxy env: Error ip not in block
	I1009 19:27:48.131145  343307 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1009 19:27:48.131185  343307 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-807463-m02
	I1009 19:27:48.131442  343307 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1009 19:27:48.131511  343307 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-807463-m02
	I1009 19:27:48.169238  343307 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33186 SSHKeyPath:/home/jenkins/minikube-integration/21683-294150/.minikube/machines/ha-807463-m02/id_rsa Username:docker}
	I1009 19:27:48.169825  343307 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33186 SSHKeyPath:/home/jenkins/minikube-integration/21683-294150/.minikube/machines/ha-807463-m02/id_rsa Username:docker}
	I1009 19:27:48.682814  343307 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1009 19:27:48.688162  343307 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1009 19:27:48.688239  343307 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1009 19:27:48.699171  343307 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1009 19:27:48.699193  343307 start.go:496] detecting cgroup driver to use...
	I1009 19:27:48.699225  343307 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1009 19:27:48.699282  343307 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1009 19:27:48.728026  343307 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1009 19:27:48.752647  343307 docker.go:218] disabling cri-docker service (if available) ...
	I1009 19:27:48.752765  343307 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1009 19:27:48.774861  343307 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1009 19:27:48.799117  343307 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1009 19:27:49.042961  343307 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1009 19:27:49.283614  343307 docker.go:234] disabling docker service ...
	I1009 19:27:49.283734  343307 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1009 19:27:49.307987  343307 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1009 19:27:49.328204  343307 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1009 19:27:49.580623  343307 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1009 19:27:49.895453  343307 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1009 19:27:49.919339  343307 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1009 19:27:49.947539  343307 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1009 19:27:49.947656  343307 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:27:49.962511  343307 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1009 19:27:49.962650  343307 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:27:49.979924  343307 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:27:49.995805  343307 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:27:50.007931  343307 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1009 19:27:50.028218  343307 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:27:50.068031  343307 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:27:50.096196  343307 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:27:50.122544  343307 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1009 19:27:50.151110  343307 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1009 19:27:50.173303  343307 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 19:27:50.489690  343307 ssh_runner.go:195] Run: sudo systemctl restart crio
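	(The sed edits above rewrite /etc/crio/crio.conf.d/02-crio.conf in place: pause image, cgroupfs cgroup manager, conmon_cgroup, and the unprivileged-port sysctl. They can be verified on the node after the restart, for example:)
	    sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' /etc/crio/crio.conf.d/02-crio.conf
	    sudo crictl version    # should report RuntimeName cri-o once the socket is back up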
	I1009 19:27:50.773593  343307 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1009 19:27:50.773686  343307 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1009 19:27:50.777653  343307 start.go:564] Will wait 60s for crictl version
	I1009 19:27:50.777737  343307 ssh_runner.go:195] Run: which crictl
	I1009 19:27:50.781240  343307 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1009 19:27:50.810791  343307 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1009 19:27:50.810938  343307 ssh_runner.go:195] Run: crio --version
	I1009 19:27:50.840800  343307 ssh_runner.go:195] Run: crio --version
	I1009 19:27:50.876670  343307 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1009 19:27:50.879670  343307 out.go:179]   - env NO_PROXY=192.168.49.2
	I1009 19:27:50.882673  343307 cli_runner.go:164] Run: docker network inspect ha-807463 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1009 19:27:50.898864  343307 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1009 19:27:50.902801  343307 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1009 19:27:50.912892  343307 mustload.go:65] Loading cluster: ha-807463
	I1009 19:27:50.913185  343307 config.go:182] Loaded profile config "ha-807463": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1009 19:27:50.913459  343307 cli_runner.go:164] Run: docker container inspect ha-807463 --format={{.State.Status}}
	I1009 19:27:50.931384  343307 host.go:66] Checking if "ha-807463" exists ...
	I1009 19:27:50.931675  343307 certs.go:69] Setting up /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/ha-807463 for IP: 192.168.49.3
	I1009 19:27:50.931689  343307 certs.go:195] generating shared ca certs ...
	I1009 19:27:50.931705  343307 certs.go:227] acquiring lock for ca certs: {Name:mk21fccf7de12baa51e226eec44546b42b579a70 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 19:27:50.931837  343307 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21683-294150/.minikube/ca.key
	I1009 19:27:50.931898  343307 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21683-294150/.minikube/proxy-client-ca.key
	I1009 19:27:50.931911  343307 certs.go:257] generating profile certs ...
	I1009 19:27:50.931992  343307 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/ha-807463/client.key
	I1009 19:27:50.932059  343307 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/ha-807463/apiserver.key.0cec3fb8
	I1009 19:27:50.932139  343307 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/ha-807463/proxy-client.key
	I1009 19:27:50.932153  343307 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-294150/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1009 19:27:50.932166  343307 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-294150/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1009 19:27:50.932181  343307 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-294150/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1009 19:27:50.932192  343307 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-294150/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1009 19:27:50.932209  343307 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/ha-807463/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1009 19:27:50.932226  343307 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/ha-807463/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1009 19:27:50.932242  343307 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/ha-807463/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1009 19:27:50.932253  343307 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/ha-807463/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1009 19:27:50.932306  343307 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-294150/.minikube/certs/296002.pem (1338 bytes)
	W1009 19:27:50.932342  343307 certs.go:480] ignoring /home/jenkins/minikube-integration/21683-294150/.minikube/certs/296002_empty.pem, impossibly tiny 0 bytes
	I1009 19:27:50.932355  343307 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-294150/.minikube/certs/ca-key.pem (1679 bytes)
	I1009 19:27:50.932378  343307 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-294150/.minikube/certs/ca.pem (1078 bytes)
	I1009 19:27:50.932408  343307 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-294150/.minikube/certs/cert.pem (1123 bytes)
	I1009 19:27:50.932435  343307 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-294150/.minikube/certs/key.pem (1679 bytes)
	I1009 19:27:50.932481  343307 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-294150/.minikube/files/etc/ssl/certs/2960022.pem (1708 bytes)
	I1009 19:27:50.932513  343307 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-294150/.minikube/certs/296002.pem -> /usr/share/ca-certificates/296002.pem
	I1009 19:27:50.932528  343307 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-294150/.minikube/files/etc/ssl/certs/2960022.pem -> /usr/share/ca-certificates/2960022.pem
	I1009 19:27:50.932539  343307 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-294150/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1009 19:27:50.932602  343307 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-807463
	I1009 19:27:50.949747  343307 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33181 SSHKeyPath:/home/jenkins/minikube-integration/21683-294150/.minikube/machines/ha-807463/id_rsa Username:docker}
	I1009 19:27:51.053408  343307 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I1009 19:27:51.057364  343307 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I1009 19:27:51.066242  343307 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I1009 19:27:51.070160  343307 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I1009 19:27:51.082531  343307 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I1009 19:27:51.086523  343307 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I1009 19:27:51.095670  343307 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I1009 19:27:51.099538  343307 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I1009 19:27:51.108444  343307 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I1009 19:27:51.112383  343307 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I1009 19:27:51.121230  343307 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I1009 19:27:51.126634  343307 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I1009 19:27:51.135934  343307 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-294150/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1009 19:27:51.157827  343307 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-294150/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1009 19:27:51.177909  343307 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-294150/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1009 19:27:51.208380  343307 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-294150/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1009 19:27:51.233729  343307 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/ha-807463/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1009 19:27:51.254881  343307 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/ha-807463/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1009 19:27:51.273448  343307 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/ha-807463/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1009 19:27:51.293146  343307 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/ha-807463/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1009 19:27:51.312924  343307 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-294150/.minikube/certs/296002.pem --> /usr/share/ca-certificates/296002.pem (1338 bytes)
	I1009 19:27:51.335482  343307 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-294150/.minikube/files/etc/ssl/certs/2960022.pem --> /usr/share/ca-certificates/2960022.pem (1708 bytes)
	I1009 19:27:51.355302  343307 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-294150/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1009 19:27:51.375754  343307 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I1009 19:27:51.391115  343307 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I1009 19:27:51.404527  343307 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I1009 19:27:51.418174  343307 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I1009 19:27:51.431794  343307 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I1009 19:27:51.445219  343307 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I1009 19:27:51.460138  343307 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I1009 19:27:51.473336  343307 ssh_runner.go:195] Run: openssl version
	I1009 19:27:51.480063  343307 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/296002.pem && ln -fs /usr/share/ca-certificates/296002.pem /etc/ssl/certs/296002.pem"
	I1009 19:27:51.488916  343307 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/296002.pem
	I1009 19:27:51.493541  343307 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  9 19:08 /usr/share/ca-certificates/296002.pem
	I1009 19:27:51.493662  343307 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/296002.pem
	I1009 19:27:51.535043  343307 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/296002.pem /etc/ssl/certs/51391683.0"
	I1009 19:27:51.543247  343307 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2960022.pem && ln -fs /usr/share/ca-certificates/2960022.pem /etc/ssl/certs/2960022.pem"
	I1009 19:27:51.552252  343307 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2960022.pem
	I1009 19:27:51.556439  343307 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  9 19:08 /usr/share/ca-certificates/2960022.pem
	I1009 19:27:51.556553  343307 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2960022.pem
	I1009 19:27:51.598587  343307 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2960022.pem /etc/ssl/certs/3ec20f2e.0"
	I1009 19:27:51.607271  343307 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1009 19:27:51.616125  343307 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1009 19:27:51.620083  343307 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  9 19:01 /usr/share/ca-certificates/minikubeCA.pem
	I1009 19:27:51.620175  343307 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1009 19:27:51.664070  343307 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1009 19:27:51.672785  343307 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1009 19:27:51.676884  343307 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1009 19:27:51.718930  343307 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1009 19:27:51.761150  343307 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1009 19:27:51.802284  343307 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1009 19:27:51.843422  343307 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1009 19:27:51.890388  343307 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
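	(The run of openssl commands above is minikube's certificate-expiry check: -checkend 86400 exits non-zero if the certificate expires within the next 24 hours. The same check can be run manually against any of the listed files, e.g.:)
	    sudo openssl x509 -noout -enddate -in /var/lib/minikube/certs/apiserver-kubelet-client.crt
	    sudo openssl x509 -noout -checkend 86400 -in /var/lib/minikube/certs/apiserver-kubelet-client.crt \
	      && echo "valid for at least 24h" || echo "expires within 24h (or unreadable)"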
	I1009 19:27:51.931465  343307 kubeadm.go:934] updating node {m02 192.168.49.3 8443 v1.34.1 crio true true} ...
	I1009 19:27:51.931643  343307 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-807463-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.3
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:ha-807463 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1009 19:27:51.931677  343307 kube-vip.go:115] generating kube-vip config ...
	I1009 19:27:51.931730  343307 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I1009 19:27:51.945085  343307 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I1009 19:27:51.945174  343307 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
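	(Because "lsmod | grep ip_vs" found no IPVS modules, minikube gave up on control-plane load-balancing and the manifest above only advertises the 192.168.49.254 VIP over ARP. A sketch of checking or loading the prerequisite on a host whose kernel ships them; the module names are the usual IPVS set, not taken from this log:)
	    lsmod | grep -E '^ip_vs' || sudo modprobe -a ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh
	    # the rendered config is deployed as a static pod manifest (see the scp below):
	    sudo ls -l /etc/kubernetes/manifests/kube-vip.yaml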
	I1009 19:27:51.945236  343307 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1009 19:27:51.955208  343307 binaries.go:44] Found k8s binaries, skipping transfer
	I1009 19:27:51.955321  343307 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I1009 19:27:51.963468  343307 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1009 19:27:51.977048  343307 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1009 19:27:51.990708  343307 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I1009 19:27:52.008521  343307 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1009 19:27:52.012741  343307 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1009 19:27:52.024091  343307 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 19:27:52.162593  343307 ssh_runner.go:195] Run: sudo systemctl start kubelet
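	(After the 10-kubeadm.conf drop-in and kubelet.service unit are copied over, daemon-reload plus start activates the override shown earlier, i.e. the ExecStart with --hostname-override=ha-807463-m02 and --node-ip=192.168.49.3. The effective unit can be inspected on the node with, for example:)
	    sudo systemctl cat kubelet               # kubelet.service plus the 10-kubeadm.conf drop-in
	    sudo systemctl is-active kubelet
	    journalctl -u kubelet --no-pager -n 20   # recent kubelet log lines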
	I1009 19:27:52.176738  343307 start.go:236] Will wait 6m0s for node &{Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1009 19:27:52.177297  343307 config.go:182] Loaded profile config "ha-807463": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1009 19:27:52.180462  343307 out.go:179] * Verifying Kubernetes components...
	I1009 19:27:52.183354  343307 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 19:27:52.328633  343307 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1009 19:27:52.343053  343307 kapi.go:59] client config for ha-807463: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21683-294150/.minikube/profiles/ha-807463/client.crt", KeyFile:"/home/jenkins/minikube-integration/21683-294150/.minikube/profiles/ha-807463/client.key", CAFile:"/home/jenkins/minikube-integration/21683-294150/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2120180), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1009 19:27:52.343132  343307 kubeadm.go:491] Overriding stale ClientConfig host https://192.168.49.254:8443 with https://192.168.49.2:8443
	I1009 19:27:52.343378  343307 node_ready.go:35] waiting up to 6m0s for node "ha-807463-m02" to be "Ready" ...
	I1009 19:28:12.417047  343307 node_ready.go:49] node "ha-807463-m02" is "Ready"
	I1009 19:28:12.417075  343307 node_ready.go:38] duration metric: took 20.07367073s for node "ha-807463-m02" to be "Ready" ...
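	(The ~20s wait resolves once the node reports the Ready condition. The equivalent check outside the test harness would be something like the following, assuming the kubeconfig context carries the profile name ha-807463:)
	    kubectl --context ha-807463 get node ha-807463-m02 \
	      -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'
	    # expected output: True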
	I1009 19:28:12.417087  343307 api_server.go:52] waiting for apiserver process to appear ...
	I1009 19:28:12.417171  343307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 19:28:12.917913  343307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 19:28:13.418163  343307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 19:28:13.917283  343307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 19:28:14.417776  343307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 19:28:14.441559  343307 api_server.go:72] duration metric: took 22.264725667s to wait for apiserver process to appear ...
	I1009 19:28:14.441582  343307 api_server.go:88] waiting for apiserver healthz status ...
	I1009 19:28:14.441601  343307 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1009 19:28:14.457402  343307 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
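	(The healthz probe is a plain HTTPS GET; on a default setup the /healthz, /livez and /readyz endpoints are readable without credentials, so the same 200/ok can be reproduced with curl. A sketch, skipping certificate verification for brevity:)
	    curl -sk https://192.168.49.2:8443/healthz            # -> ok
	    curl -sk "https://192.168.49.2:8443/readyz?verbose"   # per-check breakdown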
	I1009 19:28:14.458648  343307 api_server.go:141] control plane version: v1.34.1
	I1009 19:28:14.458703  343307 api_server.go:131] duration metric: took 17.113274ms to wait for apiserver health ...
	I1009 19:28:14.458728  343307 system_pods.go:43] waiting for kube-system pods to appear ...
	I1009 19:28:14.470395  343307 system_pods.go:59] 26 kube-system pods found
	I1009 19:28:14.470439  343307 system_pods.go:61] "coredns-66bc5c9577-tswbs" [5837c6fe-278a-4b3a-98d1-79992fe9ea08] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1009 19:28:14.470449  343307 system_pods.go:61] "coredns-66bc5c9577-vkzgf" [80c50dd0-6a2c-4662-80d3-72f45754c3df] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1009 19:28:14.470454  343307 system_pods.go:61] "etcd-ha-807463" [84964141-cf31-4652-9a3c-a9265edf4f8d] Running
	I1009 19:28:14.470459  343307 system_pods.go:61] "etcd-ha-807463-m02" [e91cfd04-5988-45ce-9dae-b204db6efe4e] Running
	I1009 19:28:14.470464  343307 system_pods.go:61] "etcd-ha-807463-m03" [26cd4bca-fd69-452f-b5a2-b9bbc5966ded] Running
	I1009 19:28:14.470473  343307 system_pods.go:61] "kindnet-bc8tf" [f003f127-5e25-434a-837b-d021fb0e3fa7] Running
	I1009 19:28:14.470477  343307 system_pods.go:61] "kindnet-dvwc7" [2a7512ff-e63c-4aa0-8b4e-fb241415067f] Running
	I1009 19:28:14.470483  343307 system_pods.go:61] "kindnet-gvpmq" [223d0c34-5384-4cd5-a0d2-842a422629ab] Running
	I1009 19:28:14.470488  343307 system_pods.go:61] "kindnet-rc46j" [22f58fe4-1d11-4259-b9f9-e8740b8b2257] Running
	I1009 19:28:14.470501  343307 system_pods.go:61] "kube-apiserver-ha-807463" [f6f353e4-8237-46db-a4a8-cd536448a79b] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1009 19:28:14.470507  343307 system_pods.go:61] "kube-apiserver-ha-807463-m02" [3d8c0d4b-2cfb-4de6-8d9f-95e25e6f2a4e] Running
	I1009 19:28:14.470517  343307 system_pods.go:61] "kube-apiserver-ha-807463-m03" [a7b828f8-ab95-440a-b42e-e48d83bf3d20] Running
	I1009 19:28:14.470521  343307 system_pods.go:61] "kube-controller-manager-ha-807463" [e409b5f4-e73e-4270-bc1b-44b9a84123c7] Running
	I1009 19:28:14.470527  343307 system_pods.go:61] "kube-controller-manager-ha-807463-m02" [bce8c53d-0ba9-4e5f-93ca-06958824d9ba] Running
	I1009 19:28:14.470538  343307 system_pods.go:61] "kube-controller-manager-ha-807463-m03" [96d81c2f-668e-4729-aa2c-ab008af31ef1] Running
	I1009 19:28:14.470542  343307 system_pods.go:61] "kube-proxy-2lp2p" [cb605c64-8004-4f40-8e70-eb8e3184d3d6] Running
	I1009 19:28:14.470546  343307 system_pods.go:61] "kube-proxy-7lpbk" [d6ba71bf-d06d-4ade-b0e4-85303842110c] Running
	I1009 19:28:14.470550  343307 system_pods.go:61] "kube-proxy-b84dn" [9c10ee5e-8408-4b6f-985a-8d4f44a869cc] Running
	I1009 19:28:14.470555  343307 system_pods.go:61] "kube-proxy-vw7c5" [89df419c-841c-4a9c-af83-50e98327318d] Running
	I1009 19:28:14.470561  343307 system_pods.go:61] "kube-scheduler-ha-807463" [d577e200-00d6-4bac-aa67-0f7ef54c4d1a] Running
	I1009 19:28:14.470568  343307 system_pods.go:61] "kube-scheduler-ha-807463-m02" [848b94f3-79dc-44dc-8416-33c96451e0c0] Running
	I1009 19:28:14.470572  343307 system_pods.go:61] "kube-scheduler-ha-807463-m03" [f7153dac-0ede-40dc-b18c-1c03bebc8414] Running
	I1009 19:28:14.470578  343307 system_pods.go:61] "kube-vip-ha-807463" [f4f09ea9-0059-4cc4-9c0b-0ea2240a1885] Running
	I1009 19:28:14.470583  343307 system_pods.go:61] "kube-vip-ha-807463-m02" [98f28358-d9e9-4f8a-b407-b14baa34ea75] Running
	I1009 19:28:14.470589  343307 system_pods.go:61] "kube-vip-ha-807463-m03" [c150d4cd-1c28-4677-9a55-6e2d119daa81] Running
	I1009 19:28:14.470594  343307 system_pods.go:61] "storage-provisioner" [b9e8a81e-2bee-4542-b231-7490dfbf6065] Running
	I1009 19:28:14.470599  343307 system_pods.go:74] duration metric: took 11.85336ms to wait for pod list to return data ...
	I1009 19:28:14.470612  343307 default_sa.go:34] waiting for default service account to be created ...
	I1009 19:28:14.482492  343307 default_sa.go:45] found service account: "default"
	I1009 19:28:14.482522  343307 default_sa.go:55] duration metric: took 11.902296ms for default service account to be created ...
	I1009 19:28:14.482532  343307 system_pods.go:116] waiting for k8s-apps to be running ...
	I1009 19:28:14.496415  343307 system_pods.go:86] 26 kube-system pods found
	I1009 19:28:14.496458  343307 system_pods.go:89] "coredns-66bc5c9577-tswbs" [5837c6fe-278a-4b3a-98d1-79992fe9ea08] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1009 19:28:14.496468  343307 system_pods.go:89] "coredns-66bc5c9577-vkzgf" [80c50dd0-6a2c-4662-80d3-72f45754c3df] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1009 19:28:14.496475  343307 system_pods.go:89] "etcd-ha-807463" [84964141-cf31-4652-9a3c-a9265edf4f8d] Running
	I1009 19:28:14.496480  343307 system_pods.go:89] "etcd-ha-807463-m02" [e91cfd04-5988-45ce-9dae-b204db6efe4e] Running
	I1009 19:28:14.496484  343307 system_pods.go:89] "etcd-ha-807463-m03" [26cd4bca-fd69-452f-b5a2-b9bbc5966ded] Running
	I1009 19:28:14.496488  343307 system_pods.go:89] "kindnet-bc8tf" [f003f127-5e25-434a-837b-d021fb0e3fa7] Running
	I1009 19:28:14.496493  343307 system_pods.go:89] "kindnet-dvwc7" [2a7512ff-e63c-4aa0-8b4e-fb241415067f] Running
	I1009 19:28:14.496502  343307 system_pods.go:89] "kindnet-gvpmq" [223d0c34-5384-4cd5-a0d2-842a422629ab] Running
	I1009 19:28:14.496509  343307 system_pods.go:89] "kindnet-rc46j" [22f58fe4-1d11-4259-b9f9-e8740b8b2257] Running
	I1009 19:28:14.496517  343307 system_pods.go:89] "kube-apiserver-ha-807463" [f6f353e4-8237-46db-a4a8-cd536448a79b] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1009 19:28:14.496523  343307 system_pods.go:89] "kube-apiserver-ha-807463-m02" [3d8c0d4b-2cfb-4de6-8d9f-95e25e6f2a4e] Running
	I1009 19:28:14.496534  343307 system_pods.go:89] "kube-apiserver-ha-807463-m03" [a7b828f8-ab95-440a-b42e-e48d83bf3d20] Running
	I1009 19:28:14.496539  343307 system_pods.go:89] "kube-controller-manager-ha-807463" [e409b5f4-e73e-4270-bc1b-44b9a84123c7] Running
	I1009 19:28:14.496544  343307 system_pods.go:89] "kube-controller-manager-ha-807463-m02" [bce8c53d-0ba9-4e5f-93ca-06958824d9ba] Running
	I1009 19:28:14.496553  343307 system_pods.go:89] "kube-controller-manager-ha-807463-m03" [96d81c2f-668e-4729-aa2c-ab008af31ef1] Running
	I1009 19:28:14.496557  343307 system_pods.go:89] "kube-proxy-2lp2p" [cb605c64-8004-4f40-8e70-eb8e3184d3d6] Running
	I1009 19:28:14.496561  343307 system_pods.go:89] "kube-proxy-7lpbk" [d6ba71bf-d06d-4ade-b0e4-85303842110c] Running
	I1009 19:28:14.496566  343307 system_pods.go:89] "kube-proxy-b84dn" [9c10ee5e-8408-4b6f-985a-8d4f44a869cc] Running
	I1009 19:28:14.496575  343307 system_pods.go:89] "kube-proxy-vw7c5" [89df419c-841c-4a9c-af83-50e98327318d] Running
	I1009 19:28:14.496579  343307 system_pods.go:89] "kube-scheduler-ha-807463" [d577e200-00d6-4bac-aa67-0f7ef54c4d1a] Running
	I1009 19:28:14.496583  343307 system_pods.go:89] "kube-scheduler-ha-807463-m02" [848b94f3-79dc-44dc-8416-33c96451e0c0] Running
	I1009 19:28:14.496587  343307 system_pods.go:89] "kube-scheduler-ha-807463-m03" [f7153dac-0ede-40dc-b18c-1c03bebc8414] Running
	I1009 19:28:14.496591  343307 system_pods.go:89] "kube-vip-ha-807463" [f4f09ea9-0059-4cc4-9c0b-0ea2240a1885] Running
	I1009 19:28:14.496597  343307 system_pods.go:89] "kube-vip-ha-807463-m02" [98f28358-d9e9-4f8a-b407-b14baa34ea75] Running
	I1009 19:28:14.496601  343307 system_pods.go:89] "kube-vip-ha-807463-m03" [c150d4cd-1c28-4677-9a55-6e2d119daa81] Running
	I1009 19:28:14.496609  343307 system_pods.go:89] "storage-provisioner" [b9e8a81e-2bee-4542-b231-7490dfbf6065] Running
	I1009 19:28:14.496616  343307 system_pods.go:126] duration metric: took 14.078508ms to wait for k8s-apps to be running ...
	I1009 19:28:14.496627  343307 system_svc.go:44] waiting for kubelet service to be running ....
	I1009 19:28:14.496696  343307 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1009 19:28:14.527254  343307 system_svc.go:56] duration metric: took 30.616666ms WaitForService to wait for kubelet
	I1009 19:28:14.527281  343307 kubeadm.go:586] duration metric: took 22.350452667s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1009 19:28:14.527300  343307 node_conditions.go:102] verifying NodePressure condition ...
	I1009 19:28:14.536047  343307 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1009 19:28:14.536130  343307 node_conditions.go:123] node cpu capacity is 2
	I1009 19:28:14.536159  343307 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1009 19:28:14.536184  343307 node_conditions.go:123] node cpu capacity is 2
	I1009 19:28:14.536225  343307 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1009 19:28:14.536247  343307 node_conditions.go:123] node cpu capacity is 2
	I1009 19:28:14.536284  343307 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1009 19:28:14.536308  343307 node_conditions.go:123] node cpu capacity is 2
	I1009 19:28:14.536330  343307 node_conditions.go:105] duration metric: took 9.020752ms to run NodePressure ...
	I1009 19:28:14.536373  343307 start.go:242] waiting for startup goroutines ...
	I1009 19:28:14.536414  343307 start.go:256] writing updated cluster config ...
	I1009 19:28:14.540247  343307 out.go:203] 
	I1009 19:28:14.543487  343307 config.go:182] Loaded profile config "ha-807463": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1009 19:28:14.543686  343307 profile.go:143] Saving config to /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/ha-807463/config.json ...
	I1009 19:28:14.547047  343307 out.go:179] * Starting "ha-807463-m03" control-plane node in "ha-807463" cluster
	I1009 19:28:14.550723  343307 cache.go:123] Beginning downloading kic base image for docker with crio
	I1009 19:28:14.553769  343307 out.go:179] * Pulling base image v0.0.48-1759745255-21703 ...
	I1009 19:28:14.556767  343307 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1009 19:28:14.556832  343307 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon
	I1009 19:28:14.557073  343307 cache.go:58] Caching tarball of preloaded images
	I1009 19:28:14.557216  343307 preload.go:233] Found /home/jenkins/minikube-integration/21683-294150/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1009 19:28:14.557276  343307 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1009 19:28:14.557431  343307 profile.go:143] Saving config to /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/ha-807463/config.json ...
	I1009 19:28:14.597092  343307 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon, skipping pull
	I1009 19:28:14.597123  343307 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 exists in daemon, skipping load
	I1009 19:28:14.597144  343307 cache.go:232] Successfully downloaded all kic artifacts
	I1009 19:28:14.597168  343307 start.go:361] acquireMachinesLock for ha-807463-m03: {Name:mk0e43107ec0c9bc8c06da921397f514d91f61d8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1009 19:28:14.597229  343307 start.go:365] duration metric: took 46.457µs to acquireMachinesLock for "ha-807463-m03"
	I1009 19:28:14.597250  343307 start.go:97] Skipping create...Using existing machine configuration
	I1009 19:28:14.597255  343307 fix.go:55] fixHost starting: m03
	I1009 19:28:14.597512  343307 cli_runner.go:164] Run: docker container inspect ha-807463-m03 --format={{.State.Status}}
	I1009 19:28:14.632017  343307 fix.go:113] recreateIfNeeded on ha-807463-m03: state=Stopped err=<nil>
	W1009 19:28:14.632042  343307 fix.go:139] unexpected machine state, will restart: <nil>
	I1009 19:28:14.635426  343307 out.go:252] * Restarting existing docker container for "ha-807463-m03" ...
	I1009 19:28:14.635514  343307 cli_runner.go:164] Run: docker start ha-807463-m03
	I1009 19:28:15.014352  343307 cli_runner.go:164] Run: docker container inspect ha-807463-m03 --format={{.State.Status}}
	I1009 19:28:15.044342  343307 kic.go:430] container "ha-807463-m03" state is running.
	I1009 19:28:15.044802  343307 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-807463-m03
	I1009 19:28:15.084035  343307 profile.go:143] Saving config to /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/ha-807463/config.json ...
	I1009 19:28:15.084294  343307 machine.go:93] provisionDockerMachine start ...
	I1009 19:28:15.084356  343307 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-807463-m03
	I1009 19:28:15.113499  343307 main.go:141] libmachine: Using SSH client type: native
	I1009 19:28:15.113819  343307 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33191 <nil> <nil>}
	I1009 19:28:15.113829  343307 main.go:141] libmachine: About to run SSH command:
	hostname
	I1009 19:28:15.114606  343307 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1009 19:28:18.387326  343307 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-807463-m03
	
	I1009 19:28:18.387353  343307 ubuntu.go:182] provisioning hostname "ha-807463-m03"
	I1009 19:28:18.387421  343307 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-807463-m03
	I1009 19:28:18.414941  343307 main.go:141] libmachine: Using SSH client type: native
	I1009 19:28:18.415269  343307 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33191 <nil> <nil>}
	I1009 19:28:18.415288  343307 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-807463-m03 && echo "ha-807463-m03" | sudo tee /etc/hostname
	I1009 19:28:18.857505  343307 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-807463-m03
	
	I1009 19:28:18.857586  343307 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-807463-m03
	I1009 19:28:18.886274  343307 main.go:141] libmachine: Using SSH client type: native
	I1009 19:28:18.886587  343307 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33191 <nil> <nil>}
	I1009 19:28:18.886603  343307 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-807463-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-807463-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-807463-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1009 19:28:19.124493  343307 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1009 19:28:19.124522  343307 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21683-294150/.minikube CaCertPath:/home/jenkins/minikube-integration/21683-294150/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21683-294150/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21683-294150/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21683-294150/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21683-294150/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21683-294150/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21683-294150/.minikube}
	I1009 19:28:19.124543  343307 ubuntu.go:190] setting up certificates
	I1009 19:28:19.124552  343307 provision.go:84] configureAuth start
	I1009 19:28:19.124639  343307 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-807463-m03
	I1009 19:28:19.150744  343307 provision.go:143] copyHostCerts
	I1009 19:28:19.150791  343307 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-294150/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21683-294150/.minikube/ca.pem
	I1009 19:28:19.150823  343307 exec_runner.go:144] found /home/jenkins/minikube-integration/21683-294150/.minikube/ca.pem, removing ...
	I1009 19:28:19.150839  343307 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21683-294150/.minikube/ca.pem
	I1009 19:28:19.150921  343307 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-294150/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21683-294150/.minikube/ca.pem (1078 bytes)
	I1009 19:28:19.151006  343307 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-294150/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21683-294150/.minikube/cert.pem
	I1009 19:28:19.151029  343307 exec_runner.go:144] found /home/jenkins/minikube-integration/21683-294150/.minikube/cert.pem, removing ...
	I1009 19:28:19.151037  343307 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21683-294150/.minikube/cert.pem
	I1009 19:28:19.151079  343307 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-294150/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21683-294150/.minikube/cert.pem (1123 bytes)
	I1009 19:28:19.151132  343307 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-294150/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21683-294150/.minikube/key.pem
	I1009 19:28:19.151154  343307 exec_runner.go:144] found /home/jenkins/minikube-integration/21683-294150/.minikube/key.pem, removing ...
	I1009 19:28:19.151159  343307 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21683-294150/.minikube/key.pem
	I1009 19:28:19.151184  343307 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-294150/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21683-294150/.minikube/key.pem (1679 bytes)
	I1009 19:28:19.151236  343307 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21683-294150/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21683-294150/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21683-294150/.minikube/certs/ca-key.pem org=jenkins.ha-807463-m03 san=[127.0.0.1 192.168.49.4 ha-807463-m03 localhost minikube]
	I1009 19:28:20.594319  343307 provision.go:177] copyRemoteCerts
	I1009 19:28:20.594391  343307 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1009 19:28:20.594445  343307 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-807463-m03
	I1009 19:28:20.617127  343307 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33191 SSHKeyPath:/home/jenkins/minikube-integration/21683-294150/.minikube/machines/ha-807463-m03/id_rsa Username:docker}
	I1009 19:28:20.793603  343307 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-294150/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1009 19:28:20.793667  343307 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-294150/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1009 19:28:20.838358  343307 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-294150/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1009 19:28:20.838425  343307 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-294150/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1009 19:28:20.897009  343307 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-294150/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1009 19:28:20.897076  343307 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-294150/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1009 19:28:20.947823  343307 provision.go:87] duration metric: took 1.823247487s to configureAuth
	I1009 19:28:20.947854  343307 ubuntu.go:206] setting minikube options for container-runtime
	I1009 19:28:20.948102  343307 config.go:182] Loaded profile config "ha-807463": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1009 19:28:20.948220  343307 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-807463-m03
	I1009 19:28:20.980853  343307 main.go:141] libmachine: Using SSH client type: native
	I1009 19:28:20.981192  343307 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33191 <nil> <nil>}
	I1009 19:28:20.981215  343307 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1009 19:28:21.547892  343307 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1009 19:28:21.547940  343307 machine.go:96] duration metric: took 6.463636002s to provisionDockerMachine
	I1009 19:28:21.547953  343307 start.go:294] postStartSetup for "ha-807463-m03" (driver="docker")
	I1009 19:28:21.547963  343307 start.go:323] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1009 19:28:21.548058  343307 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1009 19:28:21.548103  343307 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-807463-m03
	I1009 19:28:21.574619  343307 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33191 SSHKeyPath:/home/jenkins/minikube-integration/21683-294150/.minikube/machines/ha-807463-m03/id_rsa Username:docker}
	I1009 19:28:21.688699  343307 ssh_runner.go:195] Run: cat /etc/os-release
	I1009 19:28:21.693344  343307 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1009 19:28:21.693371  343307 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1009 19:28:21.693382  343307 filesync.go:126] Scanning /home/jenkins/minikube-integration/21683-294150/.minikube/addons for local assets ...
	I1009 19:28:21.693440  343307 filesync.go:126] Scanning /home/jenkins/minikube-integration/21683-294150/.minikube/files for local assets ...
	I1009 19:28:21.693513  343307 filesync.go:149] local asset: /home/jenkins/minikube-integration/21683-294150/.minikube/files/etc/ssl/certs/2960022.pem -> 2960022.pem in /etc/ssl/certs
	I1009 19:28:21.693520  343307 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-294150/.minikube/files/etc/ssl/certs/2960022.pem -> /etc/ssl/certs/2960022.pem
	I1009 19:28:21.693621  343307 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1009 19:28:21.703022  343307 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-294150/.minikube/files/etc/ssl/certs/2960022.pem --> /etc/ssl/certs/2960022.pem (1708 bytes)
	I1009 19:28:21.726060  343307 start.go:297] duration metric: took 178.090392ms for postStartSetup
	I1009 19:28:21.726183  343307 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1009 19:28:21.726252  343307 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-807463-m03
	I1009 19:28:21.754232  343307 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33191 SSHKeyPath:/home/jenkins/minikube-integration/21683-294150/.minikube/machines/ha-807463-m03/id_rsa Username:docker}
	I1009 19:28:21.887060  343307 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1009 19:28:21.902692  343307 fix.go:57] duration metric: took 7.305428838s for fixHost
	I1009 19:28:21.902721  343307 start.go:84] releasing machines lock for "ha-807463-m03", held for 7.305481549s
	I1009 19:28:21.902791  343307 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-807463-m03
	I1009 19:28:21.935444  343307 out.go:179] * Found network options:
	I1009 19:28:21.938464  343307 out.go:179]   - NO_PROXY=192.168.49.2,192.168.49.3
	W1009 19:28:21.941326  343307 proxy.go:120] fail to check proxy env: Error ip not in block
	W1009 19:28:21.941366  343307 proxy.go:120] fail to check proxy env: Error ip not in block
	W1009 19:28:21.941390  343307 proxy.go:120] fail to check proxy env: Error ip not in block
	W1009 19:28:21.941399  343307 proxy.go:120] fail to check proxy env: Error ip not in block
	I1009 19:28:21.941489  343307 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1009 19:28:21.941533  343307 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-807463-m03
	I1009 19:28:21.941553  343307 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1009 19:28:21.941612  343307 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-807463-m03
	I1009 19:28:21.971654  343307 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33191 SSHKeyPath:/home/jenkins/minikube-integration/21683-294150/.minikube/machines/ha-807463-m03/id_rsa Username:docker}
	I1009 19:28:21.991268  343307 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33191 SSHKeyPath:/home/jenkins/minikube-integration/21683-294150/.minikube/machines/ha-807463-m03/id_rsa Username:docker}
	I1009 19:28:22.521550  343307 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1009 19:28:22.531247  343307 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1009 19:28:22.531361  343307 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1009 19:28:22.554768  343307 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1009 19:28:22.554843  343307 start.go:496] detecting cgroup driver to use...
	I1009 19:28:22.554892  343307 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1009 19:28:22.554962  343307 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1009 19:28:22.583220  343307 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1009 19:28:22.599310  343307 docker.go:218] disabling cri-docker service (if available) ...
	I1009 19:28:22.599403  343307 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1009 19:28:22.632291  343307 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1009 19:28:22.653641  343307 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1009 19:28:23.037548  343307 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1009 19:28:23.288869  343307 docker.go:234] disabling docker service ...
	I1009 19:28:23.288983  343307 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1009 19:28:23.316355  343307 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1009 19:28:23.341879  343307 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1009 19:28:23.636459  343307 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1009 19:28:23.958882  343307 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1009 19:28:24.002025  343307 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1009 19:28:24.060081  343307 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1009 19:28:24.060153  343307 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:28:24.094554  343307 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1009 19:28:24.094632  343307 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:28:24.113879  343307 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:28:24.124444  343307 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:28:24.134135  343307 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1009 19:28:24.153071  343307 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:28:24.164683  343307 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:28:24.175420  343307 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:28:24.185724  343307 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1009 19:28:24.196010  343307 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
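	(Editorial note) The sed and grep commands above edit /etc/crio/crio.conf.d/02-crio.conf so that CRI-O uses the registry.k8s.io/pause:3.10.1 pause image, the cgroupfs cgroup manager with conmon placed in the "pod" cgroup, and an unprivileged-port sysctl. Assembled from those commands (not captured from the node), the touched keys in that drop-in should end up roughly as:
	pause_image = "registry.k8s.io/pause:3.10.1"
	cgroup_manager = "cgroupfs"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]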
	I1009 19:28:24.206389  343307 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 19:28:24.403396  343307 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1009 19:29:54.625257  343307 ssh_runner.go:235] Completed: sudo systemctl restart crio: (1m30.2217701s)
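	(Editorial note) The crio restart above completed only after about 90 seconds on ha-807463-m03, which consumes a large share of this test's restart budget. A minimal sketch for pulling that node's crio unit journal to see what it was doing during the restart; the profile and node names are taken from this log, and the time window is illustrative:
	minikube -p ha-807463 ssh -n ha-807463-m03 -- sudo journalctl -u crio --since "2025-10-09 19:28:24" --no-pager | tail -n 200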
	I1009 19:29:54.625289  343307 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1009 19:29:54.625347  343307 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1009 19:29:54.629422  343307 start.go:564] Will wait 60s for crictl version
	I1009 19:29:54.629487  343307 ssh_runner.go:195] Run: which crictl
	I1009 19:29:54.633348  343307 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1009 19:29:54.664178  343307 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1009 19:29:54.664263  343307 ssh_runner.go:195] Run: crio --version
	I1009 19:29:54.695047  343307 ssh_runner.go:195] Run: crio --version
	I1009 19:29:54.726968  343307 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1009 19:29:54.729882  343307 out.go:179]   - env NO_PROXY=192.168.49.2
	I1009 19:29:54.732783  343307 out.go:179]   - env NO_PROXY=192.168.49.2,192.168.49.3
	I1009 19:29:54.735745  343307 cli_runner.go:164] Run: docker network inspect ha-807463 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1009 19:29:54.754488  343307 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1009 19:29:54.758549  343307 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1009 19:29:54.769025  343307 mustload.go:65] Loading cluster: ha-807463
	I1009 19:29:54.769312  343307 config.go:182] Loaded profile config "ha-807463": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1009 19:29:54.769581  343307 cli_runner.go:164] Run: docker container inspect ha-807463 --format={{.State.Status}}
	I1009 19:29:54.789308  343307 host.go:66] Checking if "ha-807463" exists ...
	I1009 19:29:54.789631  343307 certs.go:69] Setting up /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/ha-807463 for IP: 192.168.49.4
	I1009 19:29:54.789648  343307 certs.go:195] generating shared ca certs ...
	I1009 19:29:54.789665  343307 certs.go:227] acquiring lock for ca certs: {Name:mk21fccf7de12baa51e226eec44546b42b579a70 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 19:29:54.789790  343307 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21683-294150/.minikube/ca.key
	I1009 19:29:54.789840  343307 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21683-294150/.minikube/proxy-client-ca.key
	I1009 19:29:54.789852  343307 certs.go:257] generating profile certs ...
	I1009 19:29:54.789935  343307 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/ha-807463/client.key
	I1009 19:29:54.790005  343307 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/ha-807463/apiserver.key.8f59bad3
	I1009 19:29:54.790050  343307 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/ha-807463/proxy-client.key
	I1009 19:29:54.790063  343307 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-294150/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1009 19:29:54.790075  343307 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-294150/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1009 19:29:54.790096  343307 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-294150/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1009 19:29:54.790112  343307 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-294150/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1009 19:29:54.790124  343307 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/ha-807463/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1009 19:29:54.790141  343307 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/ha-807463/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1009 19:29:54.790152  343307 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/ha-807463/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1009 19:29:54.790162  343307 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/ha-807463/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1009 19:29:54.790217  343307 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-294150/.minikube/certs/296002.pem (1338 bytes)
	W1009 19:29:54.790247  343307 certs.go:480] ignoring /home/jenkins/minikube-integration/21683-294150/.minikube/certs/296002_empty.pem, impossibly tiny 0 bytes
	I1009 19:29:54.790255  343307 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-294150/.minikube/certs/ca-key.pem (1679 bytes)
	I1009 19:29:54.790279  343307 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-294150/.minikube/certs/ca.pem (1078 bytes)
	I1009 19:29:54.790304  343307 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-294150/.minikube/certs/cert.pem (1123 bytes)
	I1009 19:29:54.790325  343307 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-294150/.minikube/certs/key.pem (1679 bytes)
	I1009 19:29:54.790366  343307 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-294150/.minikube/files/etc/ssl/certs/2960022.pem (1708 bytes)
	I1009 19:29:54.790392  343307 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-294150/.minikube/files/etc/ssl/certs/2960022.pem -> /usr/share/ca-certificates/2960022.pem
	I1009 19:29:54.790404  343307 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-294150/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1009 19:29:54.790415  343307 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-294150/.minikube/certs/296002.pem -> /usr/share/ca-certificates/296002.pem
	I1009 19:29:54.790566  343307 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-807463
	I1009 19:29:54.807723  343307 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33181 SSHKeyPath:/home/jenkins/minikube-integration/21683-294150/.minikube/machines/ha-807463/id_rsa Username:docker}
	I1009 19:29:54.905478  343307 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I1009 19:29:54.915115  343307 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I1009 19:29:54.924123  343307 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I1009 19:29:54.927867  343307 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I1009 19:29:54.936366  343307 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I1009 19:29:54.940038  343307 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I1009 19:29:54.948153  343307 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I1009 19:29:54.952558  343307 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I1009 19:29:54.962178  343307 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I1009 19:29:54.966425  343307 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I1009 19:29:54.974761  343307 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I1009 19:29:54.978501  343307 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I1009 19:29:54.987786  343307 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-294150/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1009 19:29:55.037480  343307 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-294150/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1009 19:29:55.060963  343307 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-294150/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1009 19:29:55.082145  343307 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-294150/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1009 19:29:55.105188  343307 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/ha-807463/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1009 19:29:55.128516  343307 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/ha-807463/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1009 19:29:55.149252  343307 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/ha-807463/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1009 19:29:55.172354  343307 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/ha-807463/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1009 19:29:55.193857  343307 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-294150/.minikube/files/etc/ssl/certs/2960022.pem --> /usr/share/ca-certificates/2960022.pem (1708 bytes)
	I1009 19:29:55.219080  343307 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-294150/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1009 19:29:55.237634  343307 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-294150/.minikube/certs/296002.pem --> /usr/share/ca-certificates/296002.pem (1338 bytes)
	I1009 19:29:55.256720  343307 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I1009 19:29:55.279349  343307 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I1009 19:29:55.298083  343307 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I1009 19:29:55.312857  343307 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I1009 19:29:55.328467  343307 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I1009 19:29:55.343367  343307 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I1009 19:29:55.357598  343307 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I1009 19:29:55.374321  343307 ssh_runner.go:195] Run: openssl version
	I1009 19:29:55.380839  343307 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2960022.pem && ln -fs /usr/share/ca-certificates/2960022.pem /etc/ssl/certs/2960022.pem"
	I1009 19:29:55.389522  343307 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2960022.pem
	I1009 19:29:55.394545  343307 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  9 19:08 /usr/share/ca-certificates/2960022.pem
	I1009 19:29:55.394618  343307 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2960022.pem
	I1009 19:29:55.437345  343307 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2960022.pem /etc/ssl/certs/3ec20f2e.0"
	I1009 19:29:55.447436  343307 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1009 19:29:55.456198  343307 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1009 19:29:55.460194  343307 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  9 19:01 /usr/share/ca-certificates/minikubeCA.pem
	I1009 19:29:55.460288  343307 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1009 19:29:55.502457  343307 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1009 19:29:55.511155  343307 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/296002.pem && ln -fs /usr/share/ca-certificates/296002.pem /etc/ssl/certs/296002.pem"
	I1009 19:29:55.519603  343307 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/296002.pem
	I1009 19:29:55.523571  343307 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  9 19:08 /usr/share/ca-certificates/296002.pem
	I1009 19:29:55.523682  343307 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/296002.pem
	I1009 19:29:55.565661  343307 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/296002.pem /etc/ssl/certs/51391683.0"
	I1009 19:29:55.575332  343307 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1009 19:29:55.579545  343307 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1009 19:29:55.620938  343307 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1009 19:29:55.663052  343307 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1009 19:29:55.708075  343307 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1009 19:29:55.749078  343307 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1009 19:29:55.800791  343307 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
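	(Editorial note) The openssl x509 -checkend 86400 runs above check that each control-plane certificate will still be valid 86400 seconds (24 hours) from now; a zero exit status means the certificate does not expire within that window. The same check can be run by hand against any of the paths shown, for example:
	openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400 && echo "valid for at least 24h" || echo "expires within 24h"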
	I1009 19:29:55.844259  343307 kubeadm.go:934] updating node {m03 192.168.49.4 8443 v1.34.1 crio true true} ...
	I1009 19:29:55.844433  343307 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-807463-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.4
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:ha-807463 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1009 19:29:55.844463  343307 kube-vip.go:115] generating kube-vip config ...
	I1009 19:29:55.844514  343307 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I1009 19:29:55.857076  343307 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I1009 19:29:55.857168  343307 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
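	(Editorial note) The manifest above is the kube-vip static pod that minikube generates for the control-plane VIP 192.168.49.254 (ARP mode here, since the ip_vs module check just failed); a few lines below it is copied to /etc/kubernetes/manifests/kube-vip.yaml so the kubelet runs it directly. Assuming the kubeconfig context equals the profile name, which is minikube's default, the resulting mirror pod could be inspected with:
	kubectl --context ha-807463 -n kube-system get pod kube-vip-ha-807463-m03 -o wide
	kubectl --context ha-807463 -n kube-system logs kube-vip-ha-807463-m03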
	I1009 19:29:55.857232  343307 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1009 19:29:55.865620  343307 binaries.go:44] Found k8s binaries, skipping transfer
	I1009 19:29:55.865690  343307 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I1009 19:29:55.873976  343307 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1009 19:29:55.888496  343307 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1009 19:29:55.902132  343307 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I1009 19:29:55.918614  343307 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1009 19:29:55.922408  343307 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1009 19:29:55.932872  343307 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 19:29:56.078754  343307 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1009 19:29:56.098490  343307 start.go:236] Will wait 6m0s for node &{Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1009 19:29:56.098835  343307 config.go:182] Loaded profile config "ha-807463": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1009 19:29:56.102467  343307 out.go:179] * Verifying Kubernetes components...
	I1009 19:29:56.105295  343307 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 19:29:56.244415  343307 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1009 19:29:56.260645  343307 kapi.go:59] client config for ha-807463: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21683-294150/.minikube/profiles/ha-807463/client.crt", KeyFile:"/home/jenkins/minikube-integration/21683-294150/.minikube/profiles/ha-807463/client.key", CAFile:"/home/jenkins/minikube-integration/21683-294150/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2120180), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1009 19:29:56.260766  343307 kubeadm.go:491] Overriding stale ClientConfig host https://192.168.49.254:8443 with https://192.168.49.2:8443
	I1009 19:29:56.261025  343307 node_ready.go:35] waiting up to 6m0s for node "ha-807463-m03" to be "Ready" ...
	W1009 19:29:58.265441  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:30:00.338043  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:30:02.766376  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:30:05.271576  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:30:07.765013  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:30:09.766174  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:30:12.268909  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:30:14.764872  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:30:16.768216  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:30:19.265861  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:30:21.764655  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:30:23.765433  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:30:26.265822  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:30:28.267509  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:30:30.765442  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:30:33.266200  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:30:35.765625  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:30:38.265302  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:30:40.265407  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:30:42.270313  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:30:44.765053  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:30:47.264227  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:30:49.264310  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:30:51.264693  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:30:53.266262  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:30:55.765430  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:30:57.765657  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:31:00.296961  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:31:02.765162  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:31:05.265758  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:31:07.270661  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:31:09.764829  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:31:11.766346  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:31:14.265615  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:31:16.765212  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:31:19.264362  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:31:21.265737  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:31:23.765070  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:31:26.265524  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:31:28.764786  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:31:30.765098  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:31:33.265489  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:31:35.270526  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:31:37.764838  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:31:40.265487  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:31:42.765053  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:31:45.269843  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:31:47.765589  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:31:49.766098  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:31:52.274275  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:31:54.765171  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:31:57.265540  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:31:59.265763  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:32:01.270860  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:32:03.765024  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:32:06.265424  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:32:08.766290  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:32:10.766762  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:32:13.264661  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:32:15.265789  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:32:17.765441  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:32:19.765504  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:32:22.269835  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:32:24.764880  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:32:26.764993  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:32:28.765201  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:32:30.765672  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:32:33.269831  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:32:35.271203  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:32:37.764975  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:32:39.765423  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:32:42.271235  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:32:44.765366  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:32:47.264895  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:32:49.267101  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:32:51.764961  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:32:53.765546  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:32:55.765910  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:32:58.272156  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:33:00.765521  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:33:03.265015  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:33:05.265319  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:33:07.764930  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:33:09.765819  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:33:12.270731  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:33:14.764917  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:33:16.765423  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:33:19.265783  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:33:21.268655  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:33:23.764590  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:33:25.765798  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:33:28.266110  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:33:30.765102  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:33:33.272016  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:33:35.765481  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:33:38.266269  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:33:40.268920  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:33:42.764575  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:33:44.765157  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:33:47.271446  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:33:49.764820  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:33:51.765204  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:33:54.271178  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:33:56.765244  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:33:59.264746  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:34:01.265757  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:34:03.266309  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:34:05.765330  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:34:08.271832  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:34:10.764901  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:34:13.271000  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:34:15.764750  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:34:18.271187  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:34:20.764309  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:34:22.764554  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:34:24.765015  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:34:27.265491  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:34:29.269747  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:34:31.765383  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:34:34.265977  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:34:36.271158  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:34:38.764726  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:34:41.269997  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:34:43.765647  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:34:46.264806  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:34:48.264841  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:34:50.265171  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:34:52.273405  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:34:54.764904  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:34:56.772617  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:34:59.264570  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:35:01.266121  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:35:03.764578  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:35:05.765062  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:35:07.765743  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:35:10.264753  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:35:12.267514  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:35:14.271366  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:35:16.764238  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:35:18.764646  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:35:21.264582  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:35:23.765647  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:35:26.265493  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:35:28.765534  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:35:31.266108  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:35:33.271209  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:35:35.765495  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:35:38.264544  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:35:40.265777  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:35:42.765010  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:35:45.320159  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:35:47.765477  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:35:50.267171  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:35:52.764971  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	W1009 19:35:54.765424  343307 node_ready.go:57] node "ha-807463-m03" has "Ready":"Unknown" status (will retry)
	I1009 19:35:56.261403  343307 node_ready.go:38] duration metric: took 6m0.00032425s for node "ha-807463-m03" to be "Ready" ...
	I1009 19:35:56.264406  343307 out.go:203] 
	W1009 19:35:56.267318  343307 out.go:285] X Exiting due to GUEST_START: failed to start node: adding node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1009 19:35:56.267354  343307 out.go:285] * 
	W1009 19:35:56.269757  343307 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1009 19:35:56.272075  343307 out.go:203] 
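	(Editorial note) The failure above is a 6m0s timeout waiting for ha-807463-m03 to report Ready after the restart: every poll from 19:29:58 through 19:35:54 saw the Ready condition as Unknown, which usually means the node controller stopped receiving kubelet heartbeats from that node. To see which condition is stuck and its reported reason, one could query the surviving control plane, for example (context name assumed to equal the profile name):
	kubectl --context ha-807463 get node ha-807463-m03 -o jsonpath='{range .status.conditions[*]}{.type}={.status} ({.reason}){"\n"}{end}'
	kubectl --context ha-807463 describe node ha-807463-m03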
	
	
	==> CRI-O <==
	Oct 09 19:28:13 ha-807463 crio[664]: time="2025-10-09T19:28:13.304067783Z" level=info msg="Started container" PID=1189 containerID=0e94c30541006adea7a9cf430df1905830797b4065898a1ff96a0a8704efcde5 description=kube-system/coredns-66bc5c9577-tswbs/coredns id=bd231ca3-3cb5-417c-a27f-e7e210bd2614 name=/runtime.v1.RuntimeService/StartContainer sandboxID=215954c6e5b58ec4e1876606af4120f74fa1b735788f97d908b617d088e10218
	Oct 09 19:28:43 ha-807463 conmon[1165]: conmon 49b67bb8cba0ee99aca2 <ninfo>: container 1170 exited with status 1
	Oct 09 19:28:43 ha-807463 crio[664]: time="2025-10-09T19:28:43.956113094Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=b2497cc7-982c-4437-8e10-8451b3daa825 name=/runtime.v1.ImageService/ImageStatus
	Oct 09 19:28:43 ha-807463 crio[664]: time="2025-10-09T19:28:43.957275756Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=ea766a4c-b850-4d02-b94c-15910e120466 name=/runtime.v1.ImageService/ImageStatus
	Oct 09 19:28:43 ha-807463 crio[664]: time="2025-10-09T19:28:43.958652007Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=441b7fe6-c8e8-4480-a875-e58f7cbbc12c name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:28:43 ha-807463 crio[664]: time="2025-10-09T19:28:43.958885919Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 09 19:28:43 ha-807463 crio[664]: time="2025-10-09T19:28:43.970702736Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 09 19:28:43 ha-807463 crio[664]: time="2025-10-09T19:28:43.970987881Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/934ecdd5b34159ac9e9805425bf47a7191ad8753b0f07efbbd463b24fea61539/merged/etc/passwd: no such file or directory"
	Oct 09 19:28:43 ha-807463 crio[664]: time="2025-10-09T19:28:43.971020948Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/934ecdd5b34159ac9e9805425bf47a7191ad8753b0f07efbbd463b24fea61539/merged/etc/group: no such file or directory"
	Oct 09 19:28:43 ha-807463 crio[664]: time="2025-10-09T19:28:43.971300818Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 09 19:28:43 ha-807463 crio[664]: time="2025-10-09T19:28:43.999493974Z" level=info msg="Created container 1416e569d8f8fe0cb15febba45212fdd6fb1718a9812f18587def66caefda3e1: kube-system/storage-provisioner/storage-provisioner" id=441b7fe6-c8e8-4480-a875-e58f7cbbc12c name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:28:44 ha-807463 crio[664]: time="2025-10-09T19:28:44.001575564Z" level=info msg="Starting container: 1416e569d8f8fe0cb15febba45212fdd6fb1718a9812f18587def66caefda3e1" id=2fe204ab-fca6-41e1-b709-a74e76e04d48 name=/runtime.v1.RuntimeService/StartContainer
	Oct 09 19:28:44 ha-807463 crio[664]: time="2025-10-09T19:28:44.008428394Z" level=info msg="Started container" PID=1408 containerID=1416e569d8f8fe0cb15febba45212fdd6fb1718a9812f18587def66caefda3e1 description=kube-system/storage-provisioner/storage-provisioner id=2fe204ab-fca6-41e1-b709-a74e76e04d48 name=/runtime.v1.RuntimeService/StartContainer sandboxID=7eb46fc741382f55fe16d9dcb41b62c8d30783b6fa783d2d33a2516785da8030
	Oct 09 19:28:53 ha-807463 crio[664]: time="2025-10-09T19:28:53.522067171Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 09 19:28:53 ha-807463 crio[664]: time="2025-10-09T19:28:53.525680672Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 09 19:28:53 ha-807463 crio[664]: time="2025-10-09T19:28:53.525854736Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 09 19:28:53 ha-807463 crio[664]: time="2025-10-09T19:28:53.525929903Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 09 19:28:53 ha-807463 crio[664]: time="2025-10-09T19:28:53.529201175Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 09 19:28:53 ha-807463 crio[664]: time="2025-10-09T19:28:53.52923686Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 09 19:28:53 ha-807463 crio[664]: time="2025-10-09T19:28:53.529253697Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 09 19:28:53 ha-807463 crio[664]: time="2025-10-09T19:28:53.534099464Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 09 19:28:53 ha-807463 crio[664]: time="2025-10-09T19:28:53.534352454Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 09 19:28:53 ha-807463 crio[664]: time="2025-10-09T19:28:53.534487904Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 09 19:28:53 ha-807463 crio[664]: time="2025-10-09T19:28:53.538988916Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 09 19:28:53 ha-807463 crio[664]: time="2025-10-09T19:28:53.539025699Z" level=info msg="Updated default CNI network name to kindnet"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                 NAMESPACE
	1416e569d8f8f       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6   7 minutes ago       Running             storage-provisioner       2                   7eb46fc741382       storage-provisioner                 kube-system
	0e94c30541006       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc   7 minutes ago       Running             coredns                   1                   215954c6e5b58       coredns-66bc5c9577-tswbs            kube-system
	9adc2cdd19000       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c   7 minutes ago       Running             kindnet-cni               1                   55085f7167d14       kindnet-rc46j                       kube-system
	49b67bb8cba0e       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6   7 minutes ago       Exited              storage-provisioner       1                   7eb46fc741382       storage-provisioner                 kube-system
	dc6736e2d83ca       2a8917f902489be5a8dd414209c32b77bd644d187ea646d86dbdc31e85efb551   7 minutes ago       Running             kube-vip                  1                   e1b7344c7d94c       kube-vip-ha-807463                  kube-system
	ca7bc93dc4dcf       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc   7 minutes ago       Running             coredns                   1                   833e6871e62e2       coredns-66bc5c9577-vkzgf            kube-system
	38276ddd00795       89a35e2ebb6b938201966889b5e8c85b931db6432c5643966116cd1c28bf45cd   7 minutes ago       Running             busybox                   1                   3daf554657528       busybox-7b57f96db7-5z2cl            default
	9f1fd2b441bae       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9   7 minutes ago       Running             kube-proxy                1                   77fe5d534a437       kube-proxy-b84dn                    kube-system
	71e4e3ae2d80c       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a   8 minutes ago       Running             kube-controller-manager   2                   5d2bd7a9c54dd       kube-controller-manager-ha-807463   kube-system
	9d475a483e702       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196   8 minutes ago       Running             kube-apiserver            1                   4ee70f1fb5f58       kube-apiserver-ha-807463            kube-system
	eb3eb3edb2fff       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a   8 minutes ago       Exited              kube-controller-manager   1                   5d2bd7a9c54dd       kube-controller-manager-ha-807463   kube-system
	e4593fb70e6dd       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0   8 minutes ago       Running             kube-scheduler            1                   2d270c8563e10       kube-scheduler-ha-807463            kube-system
	60abd5bf9ea13       2a8917f902489be5a8dd414209c32b77bd644d187ea646d86dbdc31e85efb551   8 minutes ago       Exited              kube-vip                  0                   e1b7344c7d94c       kube-vip-ha-807463                  kube-system
	4477522bd8536       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e   8 minutes ago       Running             etcd                      1                   a372bed836bce       etcd-ha-807463                      kube-system
	
	
	==> coredns [0e94c30541006adea7a9cf430df1905830797b4065898a1ff96a0a8704efcde5] <==
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:35674 - 64583 "HINFO IN 6906546124599759769.4081405551742000183. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.033625487s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	
	
	==> coredns [ca7bc93dc4dcf853db34af69a749d22d607d653f5e3ef5777c55ac602fd2a298] <==
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:56146 - 65116 "HINFO IN 9083642706827740027.9059612721108159707. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.020353957s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	
	
	==> describe nodes <==
	Name:               ha-807463
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=ha-807463
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=67931d2cc0a2e29153be17ee4f2d502b8a45c9cb
	                    minikube.k8s.io/name=ha-807463
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_09T19_22_34_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 09 Oct 2025 19:22:31 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-807463
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 09 Oct 2025 19:36:02 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 09 Oct 2025 19:34:40 +0000   Thu, 09 Oct 2025 19:22:26 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 09 Oct 2025 19:34:40 +0000   Thu, 09 Oct 2025 19:22:26 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 09 Oct 2025 19:34:40 +0000   Thu, 09 Oct 2025 19:22:26 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 09 Oct 2025 19:34:40 +0000   Thu, 09 Oct 2025 19:22:50 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    ha-807463
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022292Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022292Ki
	  pods:               110
	System Info:
	  Machine ID:                 97aacaba546643c3a96be1e87893b40c
	  System UUID:                97caddd7-ad20-4ad3-87a9-90a149a84db2
	  Boot ID:                    7eb9c7d9-8be8-415a-b196-052b632584a4
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7b57f96db7-5z2cl             0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 coredns-66bc5c9577-tswbs             100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     13m
	  kube-system                 coredns-66bc5c9577-vkzgf             100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     13m
	  kube-system                 etcd-ha-807463                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         13m
	  kube-system                 kindnet-rc46j                        100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      13m
	  kube-system                 kube-apiserver-ha-807463             250m (12%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-controller-manager-ha-807463    200m (10%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-proxy-b84dn                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-scheduler-ha-807463             100m (5%)     0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-vip-ha-807463                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m58s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (47%)  100m (5%)
	  memory             290Mi (3%)  390Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 7m56s                  kube-proxy       
	  Normal   Starting                 13m                    kube-proxy       
	  Warning  CgroupV1                 13m                    kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientPID     13m (x8 over 13m)      kubelet          Node ha-807463 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    13m (x8 over 13m)      kubelet          Node ha-807463 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientMemory  13m (x8 over 13m)      kubelet          Node ha-807463 status is now: NodeHasSufficientMemory
	  Normal   Starting                 13m                    kubelet          Starting kubelet.
	  Warning  CgroupV1                 13m                    kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientPID     13m                    kubelet          Node ha-807463 status is now: NodeHasSufficientPID
	  Normal   Starting                 13m                    kubelet          Starting kubelet.
	  Normal   NodeHasSufficientMemory  13m                    kubelet          Node ha-807463 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    13m                    kubelet          Node ha-807463 status is now: NodeHasNoDiskPressure
	  Normal   RegisteredNode           13m                    node-controller  Node ha-807463 event: Registered Node ha-807463 in Controller
	  Normal   NodeReady                13m                    kubelet          Node ha-807463 status is now: NodeReady
	  Normal   RegisteredNode           12m                    node-controller  Node ha-807463 event: Registered Node ha-807463 in Controller
	  Normal   RegisteredNode           11m                    node-controller  Node ha-807463 event: Registered Node ha-807463 in Controller
	  Normal   RegisteredNode           8m54s                  node-controller  Node ha-807463 event: Registered Node ha-807463 in Controller
	  Normal   Starting                 8m32s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 8m32s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  8m32s (x8 over 8m32s)  kubelet          Node ha-807463 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    8m32s (x8 over 8m32s)  kubelet          Node ha-807463 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     8m32s (x8 over 8m32s)  kubelet          Node ha-807463 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           7m55s                  node-controller  Node ha-807463 event: Registered Node ha-807463 in Controller
	  Normal   RegisteredNode           7m45s                  node-controller  Node ha-807463 event: Registered Node ha-807463 in Controller
	
	
	Name:               ha-807463-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=ha-807463-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=67931d2cc0a2e29153be17ee4f2d502b8a45c9cb
	                    minikube.k8s.io/name=ha-807463
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2025_10_09T19_23_14_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 09 Oct 2025 19:23:14 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-807463-m02
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 09 Oct 2025 19:36:09 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 09 Oct 2025 19:33:46 +0000   Thu, 09 Oct 2025 19:23:14 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 09 Oct 2025 19:33:46 +0000   Thu, 09 Oct 2025 19:23:14 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 09 Oct 2025 19:33:46 +0000   Thu, 09 Oct 2025 19:23:14 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 09 Oct 2025 19:33:46 +0000   Thu, 09 Oct 2025 19:23:57 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.3
	  Hostname:    ha-807463-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022292Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022292Ki
	  pods:               110
	System Info:
	  Machine ID:                 615ff68d05a240648cf06e5cd58bdb14
	  System UUID:                4a17c7be-c74f-481f-8bf2-76a62cd3a90f
	  Boot ID:                    7eb9c7d9-8be8-415a-b196-052b632584a4
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7b57f96db7-xqc7g                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 etcd-ha-807463-m02                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         12m
	  kube-system                 kindnet-gvpmq                            100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      12m
	  kube-system                 kube-apiserver-ha-807463-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-controller-manager-ha-807463-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-proxy-7lpbk                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-scheduler-ha-807463-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-vip-ha-807463-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (1%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 7m47s                  kube-proxy       
	  Normal   Starting                 12m                    kube-proxy       
	  Normal   RegisteredNode           12m                    node-controller  Node ha-807463-m02 event: Registered Node ha-807463-m02 in Controller
	  Normal   RegisteredNode           12m                    node-controller  Node ha-807463-m02 event: Registered Node ha-807463-m02 in Controller
	  Normal   RegisteredNode           11m                    node-controller  Node ha-807463-m02 event: Registered Node ha-807463-m02 in Controller
	  Normal   NodeHasSufficientPID     9m34s (x8 over 9m34s)  kubelet          Node ha-807463-m02 status is now: NodeHasSufficientPID
	  Normal   Starting                 9m34s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 9m34s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  9m34s (x8 over 9m34s)  kubelet          Node ha-807463-m02 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    9m34s (x8 over 9m34s)  kubelet          Node ha-807463-m02 status is now: NodeHasNoDiskPressure
	  Normal   RegisteredNode           8m54s                  node-controller  Node ha-807463-m02 event: Registered Node ha-807463-m02 in Controller
	  Normal   Starting                 8m29s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 8m29s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  8m28s (x8 over 8m29s)  kubelet          Node ha-807463-m02 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    8m28s (x8 over 8m29s)  kubelet          Node ha-807463-m02 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     8m28s (x8 over 8m29s)  kubelet          Node ha-807463-m02 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           7m55s                  node-controller  Node ha-807463-m02 event: Registered Node ha-807463-m02 in Controller
	  Normal   RegisteredNode           7m45s                  node-controller  Node ha-807463-m02 event: Registered Node ha-807463-m02 in Controller
	
	
	Name:               ha-807463-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=ha-807463-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=67931d2cc0a2e29153be17ee4f2d502b8a45c9cb
	                    minikube.k8s.io/name=ha-807463
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2025_10_09T19_25_45_0700
	                    minikube.k8s.io/version=v1.37.0
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 09 Oct 2025 19:25:44 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-807463-m04
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 09 Oct 2025 19:26:56 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Thu, 09 Oct 2025 19:25:58 +0000   Thu, 09 Oct 2025 19:29:05 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Thu, 09 Oct 2025 19:25:58 +0000   Thu, 09 Oct 2025 19:29:05 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Thu, 09 Oct 2025 19:25:58 +0000   Thu, 09 Oct 2025 19:29:05 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Thu, 09 Oct 2025 19:25:58 +0000   Thu, 09 Oct 2025 19:29:05 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.49.5
	  Hostname:    ha-807463-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022292Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022292Ki
	  pods:               110
	System Info:
	  Machine ID:                 bc067731848740afab5ce03812f74006
	  System UUID:                0f2358b6-a095-45f9-8a33-badc490163a8
	  Boot ID:                    7eb9c7d9-8be8-415a-b196-052b632584a4
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-bc8tf       100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      10m
	  kube-system                 kube-proxy-2lp2p    0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-1Gi      0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	  hugepages-32Mi     0 (0%)     0 (0%)
	  hugepages-64Ki     0 (0%)     0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 10m                kube-proxy       
	  Normal   NodeHasNoDiskPressure    10m (x3 over 10m)  kubelet          Node ha-807463-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     10m (x3 over 10m)  kubelet          Node ha-807463-m04 status is now: NodeHasSufficientPID
	  Normal   Starting                 10m                kubelet          Starting kubelet.
	  Warning  CgroupV1                 10m                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  10m (x3 over 10m)  kubelet          Node ha-807463-m04 status is now: NodeHasSufficientMemory
	  Normal   RegisteredNode           10m                node-controller  Node ha-807463-m04 event: Registered Node ha-807463-m04 in Controller
	  Normal   RegisteredNode           10m                node-controller  Node ha-807463-m04 event: Registered Node ha-807463-m04 in Controller
	  Normal   RegisteredNode           10m                node-controller  Node ha-807463-m04 event: Registered Node ha-807463-m04 in Controller
	  Normal   NodeReady                10m                kubelet          Node ha-807463-m04 status is now: NodeReady
	  Normal   RegisteredNode           8m54s              node-controller  Node ha-807463-m04 event: Registered Node ha-807463-m04 in Controller
	  Normal   RegisteredNode           7m55s              node-controller  Node ha-807463-m04 event: Registered Node ha-807463-m04 in Controller
	  Normal   RegisteredNode           7m45s              node-controller  Node ha-807463-m04 event: Registered Node ha-807463-m04 in Controller
	  Normal   NodeNotReady             7m5s               node-controller  Node ha-807463-m04 status is now: NodeNotReady
	
	
	==> dmesg <==
	[Oct 9 17:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.015195] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.531968] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.036847] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.757016] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +6.932356] kauditd_printk_skb: 36 callbacks suppressed
	[Oct 9 18:02] hrtimer: interrupt took 20603549 ns
	[Oct 9 18:59] kauditd_printk_skb: 8 callbacks suppressed
	[Oct 9 19:02] overlayfs: idmapped layers are currently not supported
	[  +0.066862] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[Oct 9 19:07] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:08] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:14] kauditd_printk_skb: 8 callbacks suppressed
	[Oct 9 19:22] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:23] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:24] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:25] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:26] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:27] overlayfs: idmapped layers are currently not supported
	[  +3.297009] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:28] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [4477522bd8536fe09afcc2397cd8beb927ccd19a6714098fb7bb1f3ef47595ea] <==
	{"level":"warn","ts":"2025-10-09T19:35:49.704512Z","caller":"etcdserver/cluster_util.go:259","msg":"failed to reach the peer URL","address":"https://192.168.49.4:2380/version","remote-member-id":"95a22811bdce1330","error":"Get \"https://192.168.49.4:2380/version\": dial tcp 192.168.49.4:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-10-09T19:35:49.704562Z","caller":"etcdserver/cluster_util.go:160","msg":"failed to get version","remote-member-id":"95a22811bdce1330","error":"Get \"https://192.168.49.4:2380/version\": dial tcp 192.168.49.4:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-10-09T19:35:49.739178Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"95a22811bdce1330","rtt":"154.736798ms","error":"dial tcp 192.168.49.4:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-10-09T19:35:49.746917Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"95a22811bdce1330","rtt":"173.409777ms","error":"dial tcp 192.168.49.4:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-10-09T19:35:53.706247Z","caller":"etcdserver/cluster_util.go:259","msg":"failed to reach the peer URL","address":"https://192.168.49.4:2380/version","remote-member-id":"95a22811bdce1330","error":"Get \"https://192.168.49.4:2380/version\": dial tcp 192.168.49.4:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-10-09T19:35:53.706306Z","caller":"etcdserver/cluster_util.go:160","msg":"failed to get version","remote-member-id":"95a22811bdce1330","error":"Get \"https://192.168.49.4:2380/version\": dial tcp 192.168.49.4:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-10-09T19:35:54.739992Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"95a22811bdce1330","rtt":"154.736798ms","error":"dial tcp 192.168.49.4:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-10-09T19:35:54.748135Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"95a22811bdce1330","rtt":"173.409777ms","error":"dial tcp 192.168.49.4:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-10-09T19:35:57.707713Z","caller":"etcdserver/cluster_util.go:259","msg":"failed to reach the peer URL","address":"https://192.168.49.4:2380/version","remote-member-id":"95a22811bdce1330","error":"Get \"https://192.168.49.4:2380/version\": dial tcp 192.168.49.4:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-10-09T19:35:57.707776Z","caller":"etcdserver/cluster_util.go:160","msg":"failed to get version","remote-member-id":"95a22811bdce1330","error":"Get \"https://192.168.49.4:2380/version\": dial tcp 192.168.49.4:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-10-09T19:35:59.740166Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"95a22811bdce1330","rtt":"154.736798ms","error":"dial tcp 192.168.49.4:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-10-09T19:35:59.749411Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"95a22811bdce1330","rtt":"173.409777ms","error":"dial tcp 192.168.49.4:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-10-09T19:36:00.169034Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"141.228504ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/ha-807463-m03\" limit:1 ","response":"range_response_count:1 size:6578"}
	{"level":"info","ts":"2025-10-09T19:36:00.169138Z","caller":"traceutil/trace.go:172","msg":"trace[1509218737] range","detail":"{range_begin:/registry/minions/ha-807463-m03; range_end:; response_count:1; response_revision:2994; }","duration":"141.331226ms","start":"2025-10-09T19:36:00.027768Z","end":"2025-10-09T19:36:00.169100Z","steps":["trace[1509218737] 'agreement among raft nodes before linearized reading'  (duration: 38.564143ms)","trace[1509218737] 'range keys from bolt db'  (duration: 102.603535ms)"],"step_count":2}
	{"level":"warn","ts":"2025-10-09T19:36:00.944560Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"192.168.49.4:44124","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-10-09T19:36:01.023009Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1981","msg":"aec36adc501070cc switched to configuration voters=(12478773398777683558 12593026477526642892)"}
	{"level":"info","ts":"2025-10-09T19:36:01.027817Z","caller":"membership/cluster.go:460","msg":"removed member","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","removed-remote-peer-id":"95a22811bdce1330","removed-remote-peer-urls":["https://192.168.49.4:2380"],"removed-remote-peer-is-learner":false}
	{"level":"info","ts":"2025-10-09T19:36:01.027877Z","caller":"rafthttp/peer.go:316","msg":"stopping remote peer","remote-peer-id":"95a22811bdce1330"}
	{"level":"info","ts":"2025-10-09T19:36:01.027910Z","caller":"rafthttp/stream.go:293","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"95a22811bdce1330"}
	{"level":"info","ts":"2025-10-09T19:36:01.027932Z","caller":"rafthttp/stream.go:293","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"95a22811bdce1330"}
	{"level":"info","ts":"2025-10-09T19:36:01.027948Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"aec36adc501070cc","remote-peer-id":"95a22811bdce1330"}
	{"level":"info","ts":"2025-10-09T19:36:01.027981Z","caller":"rafthttp/stream.go:441","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"aec36adc501070cc","remote-peer-id":"95a22811bdce1330"}
	{"level":"info","ts":"2025-10-09T19:36:01.027997Z","caller":"rafthttp/stream.go:441","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"aec36adc501070cc","remote-peer-id":"95a22811bdce1330"}
	{"level":"info","ts":"2025-10-09T19:36:01.028004Z","caller":"rafthttp/peer.go:321","msg":"stopped remote peer","remote-peer-id":"95a22811bdce1330"}
	{"level":"info","ts":"2025-10-09T19:36:01.028015Z","caller":"rafthttp/transport.go:354","msg":"removed remote peer","local-member-id":"aec36adc501070cc","removed-remote-peer-id":"95a22811bdce1330"}
	
	
	==> kernel <==
	 19:36:10 up  2:18,  0 user,  load average: 1.15, 1.31, 1.56
	Linux ha-807463 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [9adc2cdd19000926b9c7696c7b7924afabffb77a3346b0bea81bc99d3f74aa0f] <==
	I1009 19:35:33.521332       1 main.go:324] Node ha-807463-m03 has CIDR [10.244.2.0/24] 
	I1009 19:35:33.521390       1 main.go:297] Handling node with IPs: map[192.168.49.5:{}]
	I1009 19:35:33.521401       1 main.go:324] Node ha-807463-m04 has CIDR [10.244.3.0/24] 
	I1009 19:35:43.527640       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1009 19:35:43.527674       1 main.go:301] handling current node
	I1009 19:35:43.527690       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I1009 19:35:43.527696       1 main.go:324] Node ha-807463-m02 has CIDR [10.244.1.0/24] 
	I1009 19:35:43.527890       1 main.go:297] Handling node with IPs: map[192.168.49.4:{}]
	I1009 19:35:43.527932       1 main.go:324] Node ha-807463-m03 has CIDR [10.244.2.0/24] 
	I1009 19:35:43.528029       1 main.go:297] Handling node with IPs: map[192.168.49.5:{}]
	I1009 19:35:43.528041       1 main.go:324] Node ha-807463-m04 has CIDR [10.244.3.0/24] 
	I1009 19:35:53.521146       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1009 19:35:53.521182       1 main.go:301] handling current node
	I1009 19:35:53.521198       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I1009 19:35:53.521204       1 main.go:324] Node ha-807463-m02 has CIDR [10.244.1.0/24] 
	I1009 19:35:53.521367       1 main.go:297] Handling node with IPs: map[192.168.49.4:{}]
	I1009 19:35:53.521387       1 main.go:324] Node ha-807463-m03 has CIDR [10.244.2.0/24] 
	I1009 19:35:53.521449       1 main.go:297] Handling node with IPs: map[192.168.49.5:{}]
	I1009 19:35:53.521462       1 main.go:324] Node ha-807463-m04 has CIDR [10.244.3.0/24] 
	I1009 19:36:03.521444       1 main.go:297] Handling node with IPs: map[192.168.49.5:{}]
	I1009 19:36:03.521477       1 main.go:324] Node ha-807463-m04 has CIDR [10.244.3.0/24] 
	I1009 19:36:03.521843       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1009 19:36:03.521923       1 main.go:301] handling current node
	I1009 19:36:03.521942       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I1009 19:36:03.521948       1 main.go:324] Node ha-807463-m02 has CIDR [10.244.1.0/24] 
	
	
	==> kube-apiserver [9d475a483e7023b214d8a1506f2ba793d2cb34e4e0e7b5f0fc49d91b875116f7] <==
	E1009 19:28:12.347613       1 watcher.go:335] watch chan error: etcdserver: no leader
	E1009 19:28:12.347635       1 watcher.go:335] watch chan error: etcdserver: no leader
	E1009 19:28:12.349427       1 watcher.go:335] watch chan error: etcdserver: no leader
	E1009 19:28:12.349481       1 watcher.go:335] watch chan error: etcdserver: no leader
	{"level":"warn","ts":"2025-10-09T19:28:12.355259Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x40013b72c0/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":1,"error":"rpc error: code = Unavailable desc = etcdserver: leader changed"}
	{"level":"warn","ts":"2025-10-09T19:28:12.355532Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x4001b2d2c0/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":1,"error":"rpc error: code = Unavailable desc = etcdserver: leader changed"}
	{"level":"warn","ts":"2025-10-09T19:28:12.355680Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x4001b2d2c0/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":1,"error":"rpc error: code = Unavailable desc = etcdserver: leader changed"}
	{"level":"warn","ts":"2025-10-09T19:28:12.355754Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x40013b72c0/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":1,"error":"rpc error: code = Unavailable desc = etcdserver: leader changed"}
	{"level":"warn","ts":"2025-10-09T19:28:12.355815Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x400046cb40/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":0,"error":"rpc error: code = Unavailable desc = etcdserver: leader changed"}
	{"level":"warn","ts":"2025-10-09T19:28:12.355846Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x400126d680/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":1,"error":"rpc error: code = Unavailable desc = etcdserver: leader changed"}
	{"level":"warn","ts":"2025-10-09T19:28:12.355882Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x40013b72c0/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":1,"error":"rpc error: code = Unavailable desc = etcdserver: leader changed"}
	{"level":"warn","ts":"2025-10-09T19:28:12.355910Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x4000e925a0/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":0,"error":"rpc error: code = Unavailable desc = etcdserver: leader changed"}
	{"level":"warn","ts":"2025-10-09T19:28:12.355975Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x4001959680/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":0,"error":"rpc error: code = Unavailable desc = etcdserver: leader changed"}
	{"level":"warn","ts":"2025-10-09T19:28:12.357750Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x4001959680/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":1,"error":"rpc error: code = Unavailable desc = etcdserver: leader changed"}
	{"level":"warn","ts":"2025-10-09T19:28:12.357862Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x400126cb40/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":1,"error":"rpc error: code = Unavailable desc = etcdserver: leader changed"}
	E1009 19:28:12.360026       1 watcher.go:335] watch chan error: etcdserver: no leader
	E1009 19:28:12.360254       1 watcher.go:335] watch chan error: etcdserver: no leader
	{"level":"warn","ts":"2025-10-09T19:28:12.373075Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x40013b72c0/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":1,"error":"rpc error: code = Unavailable desc = etcdserver: leader changed"}
	{"level":"warn","ts":"2025-10-09T19:28:12.373191Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x40013b72c0/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":1,"error":"rpc error: code = Unavailable desc = etcdserver: leader changed"}
	{"level":"warn","ts":"2025-10-09T19:28:12.373230Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x40013b72c0/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":1,"error":"rpc error: code = Unavailable desc = etcdserver: leader changed"}
	I1009 19:28:12.408675       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	W1009 19:28:13.946618       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.49.2 192.168.49.3 192.168.49.4]
	I1009 19:28:15.990769       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1009 19:28:16.088830       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1009 19:28:22.340287       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	
	
	==> kube-controller-manager [71e4e3ae2d80c0bff2e415aa94adbf172f0541a980a58bc060eaf4114ebfa411] <==
	I1009 19:28:15.790006       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-807463-m04"
	I1009 19:28:15.790043       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-807463"
	I1009 19:28:15.790073       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-807463-m02"
	I1009 19:28:15.792034       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1009 19:28:15.792087       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1009 19:28:15.816012       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1009 19:28:15.816096       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1009 19:28:15.816262       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1009 19:28:15.816289       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1009 19:28:15.816340       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1009 19:28:15.816365       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1009 19:28:15.853279       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1009 19:28:15.853454       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1009 19:28:15.853525       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1009 19:28:15.853569       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1009 19:28:15.900161       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1009 19:28:15.900708       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1009 19:28:15.900757       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1009 19:28:15.900945       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1009 19:28:45.936143       1 endpointslice_controller.go:344] "Error syncing endpoint slices for service, retrying" logger="endpointslice-controller" key="kube-system/kube-dns" err="failed to update kube-dns-f6lp8 EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io \"kube-dns-f6lp8\": the object has been modified; please apply your changes to the latest version and try again"
	I1009 19:28:45.936767       1 event.go:377] Event(v1.ObjectReference{Kind:"Service", Namespace:"kube-system", Name:"kube-dns", UID:"d7915f4a-fefa-4618-a648-059d33b61abc", APIVersion:"v1", ResourceVersion:"291", FieldPath:""}): type: 'Warning' reason: 'FailedToUpdateEndpointSlices' Error updating Endpoint Slices for Service kube-system/kube-dns: failed to update kube-dns-f6lp8 EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io "kube-dns-f6lp8": the object has been modified; please apply your changes to the latest version and try again
	I1009 19:34:15.936504       1 taint_eviction.go:111] "Deleting pod" logger="taint-eviction-controller" controller="taint-eviction-controller" pod="default/busybox-7b57f96db7-99qlt"
	E1009 19:34:16.137977       1 replica_set.go:587] "Unhandled Error" err="sync \"default/busybox-7b57f96db7\" failed with Operation cannot be fulfilled on replicasets.apps \"busybox-7b57f96db7\": the object has been modified; please apply your changes to the latest version and try again" logger="UnhandledError"
	E1009 19:36:01.577191       1 garbagecollector.go:360] "Unhandled Error" err="error syncing item &garbagecollector.node{identity:garbagecollector.objectReference{OwnerReference:v1.OwnerReference{APIVersion:\"coordination.k8s.io/v1\", Kind:\"Lease\", Name:\"ha-807463-m03\", UID:\"ff3d2082-0b19-486f-bf15-ebb70544cffc\", Controller:(*bool)(nil), BlockOwnerDeletion:(*bool)(nil)}, Namespace:\"kube-node-lease\"}, dependentsLock:sync.RWMutex{w:sync.Mutex{_:sync.noCopy{}, mu:sync.Mutex{state:0, sema:0x0}}, writerSem:0x0, readerSem:0x0, readerCount:atomic.Int32{_:atomic.noCopy{}, v:1}, readerWait:atomic.Int32{_:atomic.noCopy{}, v:0}}, dependents:map[*garbagecollector.node]struct {}{}, deletingDependents:false, deletingDependentsLock:sync.RWMutex{w:sync.Mutex{_:sync.noCopy{}, mu:sync.Mutex{state:0, sema:0x0}}, writerSem:0x0, readerSem:0x0, readerCount:atomic.Int32{_:atomic.noCopy{}, v:0}, readerWait:atomic.Int32{_:atomic.noCopy{}, v:0}}, beingDeleted:false, beingDeletedLock:sync.RWMutex{w:sync.Mutex{_:sync.noCopy{}, mu:sync.Mutex{state:0, sema:0x0}}, writerSem:0x0, readerSem:0x0, readerCount:atomic.Int32{_:atomic.noCopy{}, v:0}, readerWait:atomic.Int32{_:atomic.noCopy{}, v:0}}, virtual:false, virtualLock:sync.RWMutex{w:sync.Mutex{_:sync.noCopy{}, mu:sync.Mutex{state:0, sema:0x0}}, writerSem:0x0, readerSem:0x0, readerCount:atomic.Int32{_:atomic.noCopy{}, v:0}, readerWait:atomic.Int32{_:atomic.noCopy{}, v:0}}, owners:[]v1.OwnerReference{v1.OwnerReference{APIVersion:\"v1\", Kind:\"Node\", Name:\"ha-807463-m03\", UID:\"ee3912b8-8841-45c0-9a4d-6e7b3ad8f5ce\", Controller:(*bool)(nil), BlockOwnerDeletion:(*bool)(nil)}}}: leases.coordination.k8s.io \"ha-807463-m03\" not found" logger="UnhandledError"
	E1009 19:36:01.621414       1 garbagecollector.go:360] "Unhandled Error" err="error syncing item &garbagecollector.node{identity:garbagecollector.objectReference{OwnerReference:v1.OwnerReference{APIVersion:\"storage.k8s.io/v1\", Kind:\"CSINode\", Name:\"ha-807463-m03\", UID:\"b4065252-cfe5-42ae-b4c2-b21091f1a081\", Controller:(*bool)(nil), BlockOwnerDeletion:(*bool)(nil)}, Namespace:\"\"}, dependentsLock:sync.RWMutex{w:sync.Mutex{_:sync.noCopy{}, mu:sync.Mutex{state:0, sema:0x0}}, writerSem:0x0, readerSem:0x0, readerCount:atomic.Int32{_:atomic.noCopy{}, v:1}, readerWait:atomic.Int32{_:atomic.noCopy{}, v:0}}, dependents:map[*garbagecollector.node]struct {}{}, deletingDependents:false, deletingDependentsLock:sync.RWMutex{w:sync.Mutex{_:sync.noCopy{}, mu:sync.Mutex{state:0, sema:0x0}}, writerSem:0x0, readerSem:0x0, readerCount:atomic.Int32{_:atomic.noCopy{}, v:0}, readerWait:atomic.Int32{_:atomic.noCopy{}, v:0}}, beingDeleted:false, beingDeletedLock:sync.RWMutex{w:sync.Mutex{_:sync.noCopy{}, mu:sync.Mut
ex{state:0, sema:0x0}}, writerSem:0x0, readerSem:0x0, readerCount:atomic.Int32{_:atomic.noCopy{}, v:0}, readerWait:atomic.Int32{_:atomic.noCopy{}, v:0}}, virtual:false, virtualLock:sync.RWMutex{w:sync.Mutex{_:sync.noCopy{}, mu:sync.Mutex{state:0, sema:0x0}}, writerSem:0x0, readerSem:0x0, readerCount:atomic.Int32{_:atomic.noCopy{}, v:0}, readerWait:atomic.Int32{_:atomic.noCopy{}, v:0}}, owners:[]v1.OwnerReference{v1.OwnerReference{APIVersion:\"v1\", Kind:\"Node\", Name:\"ha-807463-m03\", UID:\"ee3912b8-8841-45c0-9a4d-6e7b3ad8f5ce\", Controller:(*bool)(nil), BlockOwnerDeletion:(*bool)(nil)}}}: csinodes.storage.k8s.io \"ha-807463-m03\" not found" logger="UnhandledError"
	
	
	==> kube-controller-manager [eb3eb3edb2fff30f90b98210a15c7960a0d8f4700c380a4bc2a236e3530d4043] <==
	I1009 19:27:40.800035       1 serving.go:386] Generated self-signed cert in-memory
	I1009 19:27:45.392772       1 controllermanager.go:191] "Starting" version="v1.34.1"
	I1009 19:27:45.392919       1 controllermanager.go:193] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1009 19:27:45.408597       1 secure_serving.go:211] Serving securely on 127.0.0.1:10257
	I1009 19:27:45.408878       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1009 19:27:45.409007       1 dynamic_cafile_content.go:161] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I1009 19:27:45.409053       1 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	E1009 19:28:00.394482       1 controllermanager.go:245] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: an error on the server (\"[+]ping ok\\n[+]log ok\\n[-]etcd failed: reason withheld\\n[+]poststarthook/start-apiserver-admission-initializer ok\\n[+]poststarthook/generic-apiserver-start-informers ok\\n[+]poststarthook/priority-and-fairness-config-consumer ok\\n[+]poststarthook/priority-and-fairness-filter ok\\n[+]poststarthook/storage-object-count-tracker-hook ok\\n[+]poststarthook/start-apiextensions-informers ok\\n[+]poststarthook/start-apiextensions-controllers ok\\n[+]poststarthook/crd-informer-synced ok\\n[+]poststarthook/start-system-namespaces-controller ok\\n[+]poststarthook/start-cluster-authentication-info-controller ok\\n[+]poststarthook/start-kube-apiserver-identity-lease-controller ok\\n[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok\\n[+]poststar
thook/start-legacy-token-tracking-controller ok\\n[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld\\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\\n[+]poststarthook/priority-and-fairness-config-producer ok\\n[+]poststarthook/bootstrap-controller ok\\n[+]poststarthook/start-kubernetes-service-cidr-controller ok\\n[+]poststarthook/aggregator-reload-proxy-client-cert ok\\n[+]poststarthook/start-kube-aggregator-informers ok\\n[+]poststarthook/apiservice-status-local-available-controller ok\\n[+]poststarthook/apiservice-status-remote-available-controller ok\\n[+]poststarthook/apiservice-registration-controller ok\\n[+]poststarthook/apiservice-discovery-controller ok\\n[+]poststarthook/kube-apiserver-autoregistration ok\\n[+]autoregister-completion ok\\n[+]poststarthook/apiservice-openapi-controller ok\\n[+]poststarthook/apiservice-openapiv3-controller ok\\nhealthz check failed\") has prevented the reques
t from succeeding"
	
	
	==> kube-proxy [9f1fd2b441bae8a1e1677da06354cd58eb9120cf79ae41fd89aade0d9e36317b] <==
	I1009 19:28:13.524866       1 server_linux.go:53] "Using iptables proxy"
	I1009 19:28:13.683998       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1009 19:28:13.785200       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1009 19:28:13.785297       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1009 19:28:13.785401       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1009 19:28:13.850524       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1009 19:28:13.850775       1 server_linux.go:132] "Using iptables Proxier"
	I1009 19:28:13.858532       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1009 19:28:13.859447       1 server.go:527] "Version info" version="v1.34.1"
	I1009 19:28:13.859472       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1009 19:28:13.869614       1 config.go:200] "Starting service config controller"
	I1009 19:28:13.869702       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1009 19:28:13.869759       1 config.go:106] "Starting endpoint slice config controller"
	I1009 19:28:13.869806       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1009 19:28:13.869854       1 config.go:403] "Starting serviceCIDR config controller"
	I1009 19:28:13.869903       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1009 19:28:13.870681       1 config.go:309] "Starting node config controller"
	I1009 19:28:13.870751       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1009 19:28:13.870783       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1009 19:28:13.977741       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1009 19:28:13.979480       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1009 19:28:13.979510       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [e4593fb70e6dd0047bc83f89897d4c1ad23896e5ca9a3628c4bbeea360f8cbaf] <==
	E1009 19:27:48.441390       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1009 19:27:48.441455       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1009 19:27:48.441529       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1009 19:27:48.441597       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1009 19:27:48.441717       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1009 19:27:48.441800       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1009 19:27:48.441887       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1009 19:27:48.441935       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1009 19:27:49.269919       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1009 19:27:49.288585       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1009 19:27:49.311114       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1009 19:27:49.371959       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1009 19:27:49.404581       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1009 19:27:49.410730       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1009 19:27:49.410883       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1009 19:27:49.418641       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1009 19:27:49.443744       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E1009 19:27:49.470207       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1009 19:27:49.520778       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1009 19:27:49.544432       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1009 19:27:49.566871       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1009 19:27:49.622487       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1009 19:27:49.659599       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1009 19:27:49.667074       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	I1009 19:27:51.424577       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 09 19:27:50 ha-807463 kubelet[800]: I1009 19:27:50.615218     800 apiserver.go:52] "Watching apiserver"
	Oct 09 19:27:50 ha-807463 kubelet[800]: I1009 19:27:50.620110     800 kubelet.go:3202] "Trying to delete pod" pod="kube-system/kube-vip-ha-807463" podUID="2851b5b6-b28e-4749-8fba-920501dc7be3"
	Oct 09 19:27:50 ha-807463 kubelet[800]: I1009 19:27:50.622751     800 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Oct 09 19:27:50 ha-807463 kubelet[800]: I1009 19:27:50.663228     800 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/b9e8a81e-2bee-4542-b231-7490dfbf6065-tmp\") pod \"storage-provisioner\" (UID: \"b9e8a81e-2bee-4542-b231-7490dfbf6065\") " pod="kube-system/storage-provisioner"
	Oct 09 19:27:50 ha-807463 kubelet[800]: I1009 19:27:50.663304     800 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9c10ee5e-8408-4b6f-985a-8d4f44a869cc-xtables-lock\") pod \"kube-proxy-b84dn\" (UID: \"9c10ee5e-8408-4b6f-985a-8d4f44a869cc\") " pod="kube-system/kube-proxy-b84dn"
	Oct 09 19:27:50 ha-807463 kubelet[800]: I1009 19:27:50.663360     800 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/22f58fe4-1d11-4259-b9f9-e8740b8b2257-cni-cfg\") pod \"kindnet-rc46j\" (UID: \"22f58fe4-1d11-4259-b9f9-e8740b8b2257\") " pod="kube-system/kindnet-rc46j"
	Oct 09 19:27:50 ha-807463 kubelet[800]: I1009 19:27:50.663389     800 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9c10ee5e-8408-4b6f-985a-8d4f44a869cc-lib-modules\") pod \"kube-proxy-b84dn\" (UID: \"9c10ee5e-8408-4b6f-985a-8d4f44a869cc\") " pod="kube-system/kube-proxy-b84dn"
	Oct 09 19:27:50 ha-807463 kubelet[800]: I1009 19:27:50.663421     800 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/22f58fe4-1d11-4259-b9f9-e8740b8b2257-xtables-lock\") pod \"kindnet-rc46j\" (UID: \"22f58fe4-1d11-4259-b9f9-e8740b8b2257\") " pod="kube-system/kindnet-rc46j"
	Oct 09 19:27:50 ha-807463 kubelet[800]: I1009 19:27:50.663440     800 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/22f58fe4-1d11-4259-b9f9-e8740b8b2257-lib-modules\") pod \"kindnet-rc46j\" (UID: \"22f58fe4-1d11-4259-b9f9-e8740b8b2257\") " pod="kube-system/kindnet-rc46j"
	Oct 09 19:27:50 ha-807463 kubelet[800]: I1009 19:27:50.667816     800 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="976e04e1cbea4b516ead31d4a83e047c" path="/var/lib/kubelet/pods/976e04e1cbea4b516ead31d4a83e047c/volumes"
	Oct 09 19:28:00 ha-807463 kubelet[800]: I1009 19:28:00.774505     800 scope.go:117] "RemoveContainer" containerID="eb3eb3edb2fff30f90b98210a15c7960a0d8f4700c380a4bc2a236e3530d4043"
	Oct 09 19:28:10 ha-807463 kubelet[800]: E1009 19:28:10.305261     800 controller.go:195] "Failed to update lease" err="Put \"https://192.168.49.2:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ha-807463?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
	Oct 09 19:28:10 ha-807463 kubelet[800]: E1009 19:28:10.446000     800 kubelet_node_status.go:486] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-09T19:28:00Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-09T19:28:00Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-09T19:28:00Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-09T19:28:00Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"re
cursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"ha-807463\": Patch \"https://192.168.49.2:8443/api/v1/nodes/ha-807463/status?timeout=10s\": context deadline exceeded"
	Oct 09 19:28:12 ha-807463 kubelet[800]: I1009 19:28:12.468697     800 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	Oct 09 19:28:12 ha-807463 kubelet[800]: I1009 19:28:12.552182     800 kubelet.go:3208] "Deleted mirror pod as it didn't match the static Pod" pod="kube-system/kube-vip-ha-807463"
	Oct 09 19:28:12 ha-807463 kubelet[800]: I1009 19:28:12.552222     800 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-vip-ha-807463"
	Oct 09 19:28:12 ha-807463 kubelet[800]: W1009 19:28:12.667154     800 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/fea8f67be9d437ba2c81eace43786d004b06e1b2b690c3ab6f02d448677e04b6/crio-3daf554657528d08ab602a2eafcc6211b760b3734a78136296b70f4b7a32baf0 WatchSource:0}: Error finding container 3daf554657528d08ab602a2eafcc6211b760b3734a78136296b70f4b7a32baf0: Status 404 returned error can't find the container with id 3daf554657528d08ab602a2eafcc6211b760b3734a78136296b70f4b7a32baf0
	Oct 09 19:28:12 ha-807463 kubelet[800]: W1009 19:28:12.708992     800 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/fea8f67be9d437ba2c81eace43786d004b06e1b2b690c3ab6f02d448677e04b6/crio-833e6871e62e2720786472951e1248b710ee0b6ab3e58c51a072c96c41234008 WatchSource:0}: Error finding container 833e6871e62e2720786472951e1248b710ee0b6ab3e58c51a072c96c41234008: Status 404 returned error can't find the container with id 833e6871e62e2720786472951e1248b710ee0b6ab3e58c51a072c96c41234008
	Oct 09 19:28:12 ha-807463 kubelet[800]: I1009 19:28:12.824883     800 kubelet.go:3202] "Trying to delete pod" pod="kube-system/kube-vip-ha-807463" podUID="2851b5b6-b28e-4749-8fba-920501dc7be3"
	Oct 09 19:28:12 ha-807463 kubelet[800]: I1009 19:28:12.854312     800 scope.go:117] "RemoveContainer" containerID="60abd5bf9ea13b7e15b4cb133643cb620ae0f536d45d6ac30703be2e3ef7a45f"
	Oct 09 19:28:13 ha-807463 kubelet[800]: W1009 19:28:13.100847     800 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/fea8f67be9d437ba2c81eace43786d004b06e1b2b690c3ab6f02d448677e04b6/crio-215954c6e5b58ec4e1876606af4120f74fa1b735788f97d908b617d088e10218 WatchSource:0}: Error finding container 215954c6e5b58ec4e1876606af4120f74fa1b735788f97d908b617d088e10218: Status 404 returned error can't find the container with id 215954c6e5b58ec4e1876606af4120f74fa1b735788f97d908b617d088e10218
	Oct 09 19:28:13 ha-807463 kubelet[800]: I1009 19:28:13.258189     800 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-vip-ha-807463" podStartSLOduration=1.258171868 podStartE2EDuration="1.258171868s" podCreationTimestamp="2025-10-09 19:28:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-09 19:28:13.202642686 +0000 UTC m=+34.717581260" watchObservedRunningTime="2025-10-09 19:28:13.258171868 +0000 UTC m=+34.773110434"
	Oct 09 19:28:38 ha-807463 kubelet[800]: E1009 19:28:38.614610     800 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"75a5236150873d2e47f94fa0ec7a3606e1bb185ee804c71cf7aaaaeb1a9af3aa\": container with ID starting with 75a5236150873d2e47f94fa0ec7a3606e1bb185ee804c71cf7aaaaeb1a9af3aa not found: ID does not exist" containerID="75a5236150873d2e47f94fa0ec7a3606e1bb185ee804c71cf7aaaaeb1a9af3aa"
	Oct 09 19:28:38 ha-807463 kubelet[800]: I1009 19:28:38.614682     800 kuberuntime_gc.go:364] "Error getting ContainerStatus for containerID" containerID="75a5236150873d2e47f94fa0ec7a3606e1bb185ee804c71cf7aaaaeb1a9af3aa" err="rpc error: code = NotFound desc = could not find container \"75a5236150873d2e47f94fa0ec7a3606e1bb185ee804c71cf7aaaaeb1a9af3aa\": container with ID starting with 75a5236150873d2e47f94fa0ec7a3606e1bb185ee804c71cf7aaaaeb1a9af3aa not found: ID does not exist"
	Oct 09 19:28:43 ha-807463 kubelet[800]: I1009 19:28:43.955424     800 scope.go:117] "RemoveContainer" containerID="49b67bb8cba0ee99aca2811ac91734a84329f896cb75fab3ad456d53105ce0a1"
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p ha-807463 -n ha-807463
helpers_test.go:269: (dbg) Run:  kubectl --context ha-807463 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: busybox-7b57f96db7-hm827
helpers_test.go:282: ======> post-mortem[TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context ha-807463 describe pod busybox-7b57f96db7-hm827
helpers_test.go:290: (dbg) kubectl --context ha-807463 describe pod busybox-7b57f96db7-hm827:

                                                
                                                
-- stdout --
	Name:             busybox-7b57f96db7-hm827
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             <none>
	Labels:           app=busybox
	                  pod-template-hash=7b57f96db7
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Controlled By:    ReplicaSet/busybox-7b57f96db7
	Containers:
	  busybox:
	    Image:      gcr.io/k8s-minikube/busybox:1.28
	    Port:       <none>
	    Host Port:  <none>
	    Command:
	      sleep
	      3600
	    Environment:  <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-d8g9g (ro)
	Conditions:
	  Type           Status
	  PodScheduled   False 
	Volumes:
	  kube-api-access-d8g9g:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason            Age   From               Message
	  ----     ------            ----  ----               -------
	  Warning  FailedScheduling  115s  default-scheduler  0/4 nodes are available: 2 node(s) didn't match pod anti-affinity rules, 2 node(s) had untolerated taint {node.kubernetes.io/unreachable: }. no new claims to deallocate, preemption: 0/4 nodes are available: 2 No preemption victims found for incoming pod, 2 Preemption is not helpful for scheduling.
	  Warning  FailedScheduling  115s  default-scheduler  0/4 nodes are available: 2 node(s) didn't match pod anti-affinity rules, 2 node(s) had untolerated taint {node.kubernetes.io/unreachable: }. no new claims to deallocate, preemption: 0/4 nodes are available: 2 No preemption victims found for incoming pod, 2 Preemption is not helpful for scheduling.
	  Warning  FailedScheduling  11s   default-scheduler  0/4 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/unreachable: }, 1 node(s) were unschedulable, 2 node(s) didn't match pod anti-affinity rules. no new claims to deallocate, preemption: 0/4 nodes are available: 2 No preemption victims found for incoming pod, 2 Preemption is not helpful for scheduling.
	  Warning  FailedScheduling  11s   default-scheduler  0/4 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/unreachable: }, 1 node(s) were unschedulable, 2 node(s) didn't match pod anti-affinity rules. no new claims to deallocate, preemption: 0/4 nodes are available: 2 No preemption victims found for incoming pod, 2 Preemption is not helpful for scheduling.

                                                
                                                
-- /stdout --
helpers_test.go:293: <<< TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (3.21s)

                                                
                                    
x
+
TestJSONOutput/pause/Command (2.52s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 pause -p json-output-389165 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-linux-arm64 pause -p json-output-389165 --output=json --user=testUser: exit status 80 (2.51728239s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"29e4c95d-aa4e-4e79-bc9e-2c0a43d4efa9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"Pausing node json-output-389165 ...","name":"Pausing","totalsteps":"1"}}
	{"specversion":"1.0","id":"49bf97c4-8898-4741-b1f4-3ab95e5ec5fd","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"80","issues":"","message":"Pause: list running: runc: sudo runc list -f json: Process exited with status 1\nstdout:\n\nstderr:\ntime=\"2025-10-09T19:41:01Z\" level=error msg=\"open /run/runc: no such file or directory\"","name":"GUEST_PAUSE","url":""}}
	{"specversion":"1.0","id":"eb320fe0-ca9c-4124-8e7a-4ca59d06a9e8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│    Please also attach the following f
ile to the GitHub issue:                             │\n│    - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │\n│                                                                                           │\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

                                                
                                                
-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-linux-arm64 pause -p json-output-389165 --output=json --user=testUser": exit status 80
--- FAIL: TestJSONOutput/pause/Command (2.52s)

                                                
                                    
x
+
TestJSONOutput/unpause/Command (1.83s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 unpause -p json-output-389165 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-linux-arm64 unpause -p json-output-389165 --output=json --user=testUser: exit status 80 (1.826234504s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"445e19d9-a811-417b-9806-d49f0905a4ee","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"Unpausing node json-output-389165 ...","name":"Unpausing","totalsteps":"1"}}
	{"specversion":"1.0","id":"559d8ad7-226d-4768-b3d1-e566cbfebd2a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"80","issues":"","message":"Pause: list paused: runc: sudo runc list -f json: Process exited with status 1\nstdout:\n\nstderr:\ntime=\"2025-10-09T19:41:03Z\" level=error msg=\"open /run/runc: no such file or directory\"","name":"GUEST_UNPAUSE","url":""}}
	{"specversion":"1.0","id":"d8607362-d97c-4d3f-ba89-3a69c0f3259b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│    Please also attach the following f
ile to the GitHub issue:                             │\n│    - /tmp/minikube_unpause_85c908ac827001a7ced33feb0caf7da086d17584_0.log                 │\n│                                                                                           │\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

                                                
                                                
-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-linux-arm64 unpause -p json-output-389165 --output=json --user=testUser": exit status 80
--- FAIL: TestJSONOutput/unpause/Command (1.83s)

                                                
                                    
x
+
TestPause/serial/Pause (6.92s)

                                                
                                                
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-383163 --alsologtostderr -v=5
pause_test.go:110: (dbg) Non-zero exit: out/minikube-linux-arm64 pause -p pause-383163 --alsologtostderr -v=5: exit status 80 (1.813001268s)

                                                
                                                
-- stdout --
	* Pausing node pause-383163 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1009 20:03:35.145397  457717 out.go:360] Setting OutFile to fd 1 ...
	I1009 20:03:35.146241  457717 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1009 20:03:35.146257  457717 out.go:374] Setting ErrFile to fd 2...
	I1009 20:03:35.146262  457717 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1009 20:03:35.146568  457717 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21683-294150/.minikube/bin
	I1009 20:03:35.146840  457717 out.go:368] Setting JSON to false
	I1009 20:03:35.146871  457717 mustload.go:65] Loading cluster: pause-383163
	I1009 20:03:35.147304  457717 config.go:182] Loaded profile config "pause-383163": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1009 20:03:35.147882  457717 cli_runner.go:164] Run: docker container inspect pause-383163 --format={{.State.Status}}
	I1009 20:03:35.166082  457717 host.go:66] Checking if "pause-383163" exists ...
	I1009 20:03:35.166403  457717 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1009 20:03:35.224858  457717 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:51 OomKillDisable:true NGoroutines:62 SystemTime:2025-10-09 20:03:35.215256337 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214827008 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1009 20:03:35.225692  457717 pause.go:58] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-
cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-arm64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1758198818-20370/minikube-v1.37.0-1758198818-20370-arm64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1758198818-20370-arm64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qe
mu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:pause-383163 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) want
virtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1009 20:03:35.230496  457717 out.go:179] * Pausing node pause-383163 ... 
	I1009 20:03:35.233270  457717 host.go:66] Checking if "pause-383163" exists ...
	I1009 20:03:35.233615  457717 ssh_runner.go:195] Run: systemctl --version
	I1009 20:03:35.233668  457717 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-383163
	I1009 20:03:35.251685  457717 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33391 SSHKeyPath:/home/jenkins/minikube-integration/21683-294150/.minikube/machines/pause-383163/id_rsa Username:docker}
	I1009 20:03:35.355790  457717 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1009 20:03:35.368753  457717 pause.go:52] kubelet running: true
	I1009 20:03:35.368856  457717 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1009 20:03:35.588666  457717 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1009 20:03:35.588803  457717 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1009 20:03:35.663868  457717 cri.go:89] found id: "a167ee63efd53a4275e3e7873bad1603ebe7e8a31f0dd3198756d4b1f148e52a"
	I1009 20:03:35.663892  457717 cri.go:89] found id: "cb4576736bbda7b31792b910061f810e74ebfe1099b49efb9c81dfdd2a1f445b"
	I1009 20:03:35.663902  457717 cri.go:89] found id: "75d7f10be8c3e4bdde6c5890b28343819891d67d2e73eb06f38c47013ae3a3cb"
	I1009 20:03:35.663906  457717 cri.go:89] found id: "4a07552f3446603a46059c12e8713e08b798083b8d17d79c386bb391fc8c893c"
	I1009 20:03:35.663909  457717 cri.go:89] found id: "b9eb2f7f088ee645099c5cd4b8e1f669da2435cd313728f2bfbfc759ff9937b6"
	I1009 20:03:35.663913  457717 cri.go:89] found id: "a3e1d7ac8b25781dbad544fea22784db5fdb0f4de80670ff5f131dc3cc536739"
	I1009 20:03:35.663916  457717 cri.go:89] found id: "8b7b5b8265013e32789ed2351787ae158830229a757a1bc103a4456924b76035"
	I1009 20:03:35.663918  457717 cri.go:89] found id: "8ab6890c2164d0b6bbc82e2679dbd67b5dfe706686726cd94224aaf22c16f80f"
	I1009 20:03:35.663921  457717 cri.go:89] found id: "bb3480137661617835e8f2461eda88ed8e0afcd207648cb4d703a117457533cf"
	I1009 20:03:35.663928  457717 cri.go:89] found id: "5b2ba970850f91ec7dc47036664e17c95915f9f4e974dfe18f12c57f19dc05a3"
	I1009 20:03:35.663931  457717 cri.go:89] found id: "715315fe8199656e0b35e6405a491d6927104742238ac1c9811ad467110e9936"
	I1009 20:03:35.663934  457717 cri.go:89] found id: "f8690925bda20a05089b5b66d446d2a265402cbc16285b44139837240ca69a30"
	I1009 20:03:35.663937  457717 cri.go:89] found id: "5f2a4c1ed909bfd58b69d5787042aa91a6c0d43e3eef176ba2274c648fad521a"
	I1009 20:03:35.663940  457717 cri.go:89] found id: "a58cd421c4789d3b1e15645239af493035656079a6c5a7405c605212d4f12db9"
	I1009 20:03:35.663943  457717 cri.go:89] found id: ""
	I1009 20:03:35.663993  457717 ssh_runner.go:195] Run: sudo runc list -f json
	I1009 20:03:35.675490  457717 retry.go:31] will retry after 197.407672ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-09T20:03:35Z" level=error msg="open /run/runc: no such file or directory"
	I1009 20:03:35.873960  457717 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1009 20:03:35.888091  457717 pause.go:52] kubelet running: false
	I1009 20:03:35.888168  457717 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1009 20:03:36.050805  457717 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1009 20:03:36.050895  457717 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1009 20:03:36.124749  457717 cri.go:89] found id: "a167ee63efd53a4275e3e7873bad1603ebe7e8a31f0dd3198756d4b1f148e52a"
	I1009 20:03:36.124786  457717 cri.go:89] found id: "cb4576736bbda7b31792b910061f810e74ebfe1099b49efb9c81dfdd2a1f445b"
	I1009 20:03:36.124791  457717 cri.go:89] found id: "75d7f10be8c3e4bdde6c5890b28343819891d67d2e73eb06f38c47013ae3a3cb"
	I1009 20:03:36.124795  457717 cri.go:89] found id: "4a07552f3446603a46059c12e8713e08b798083b8d17d79c386bb391fc8c893c"
	I1009 20:03:36.124799  457717 cri.go:89] found id: "b9eb2f7f088ee645099c5cd4b8e1f669da2435cd313728f2bfbfc759ff9937b6"
	I1009 20:03:36.124803  457717 cri.go:89] found id: "a3e1d7ac8b25781dbad544fea22784db5fdb0f4de80670ff5f131dc3cc536739"
	I1009 20:03:36.124806  457717 cri.go:89] found id: "8b7b5b8265013e32789ed2351787ae158830229a757a1bc103a4456924b76035"
	I1009 20:03:36.124810  457717 cri.go:89] found id: "8ab6890c2164d0b6bbc82e2679dbd67b5dfe706686726cd94224aaf22c16f80f"
	I1009 20:03:36.124812  457717 cri.go:89] found id: "bb3480137661617835e8f2461eda88ed8e0afcd207648cb4d703a117457533cf"
	I1009 20:03:36.124845  457717 cri.go:89] found id: "5b2ba970850f91ec7dc47036664e17c95915f9f4e974dfe18f12c57f19dc05a3"
	I1009 20:03:36.124852  457717 cri.go:89] found id: "715315fe8199656e0b35e6405a491d6927104742238ac1c9811ad467110e9936"
	I1009 20:03:36.124856  457717 cri.go:89] found id: "f8690925bda20a05089b5b66d446d2a265402cbc16285b44139837240ca69a30"
	I1009 20:03:36.124859  457717 cri.go:89] found id: "5f2a4c1ed909bfd58b69d5787042aa91a6c0d43e3eef176ba2274c648fad521a"
	I1009 20:03:36.124862  457717 cri.go:89] found id: "a58cd421c4789d3b1e15645239af493035656079a6c5a7405c605212d4f12db9"
	I1009 20:03:36.124865  457717 cri.go:89] found id: ""
	I1009 20:03:36.124963  457717 ssh_runner.go:195] Run: sudo runc list -f json
	I1009 20:03:36.137895  457717 retry.go:31] will retry after 490.240126ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-09T20:03:36Z" level=error msg="open /run/runc: no such file or directory"
	I1009 20:03:36.628416  457717 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1009 20:03:36.643048  457717 pause.go:52] kubelet running: false
	I1009 20:03:36.643116  457717 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1009 20:03:36.790034  457717 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1009 20:03:36.790172  457717 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1009 20:03:36.878963  457717 cri.go:89] found id: "a167ee63efd53a4275e3e7873bad1603ebe7e8a31f0dd3198756d4b1f148e52a"
	I1009 20:03:36.879042  457717 cri.go:89] found id: "cb4576736bbda7b31792b910061f810e74ebfe1099b49efb9c81dfdd2a1f445b"
	I1009 20:03:36.879063  457717 cri.go:89] found id: "75d7f10be8c3e4bdde6c5890b28343819891d67d2e73eb06f38c47013ae3a3cb"
	I1009 20:03:36.879089  457717 cri.go:89] found id: "4a07552f3446603a46059c12e8713e08b798083b8d17d79c386bb391fc8c893c"
	I1009 20:03:36.879120  457717 cri.go:89] found id: "b9eb2f7f088ee645099c5cd4b8e1f669da2435cd313728f2bfbfc759ff9937b6"
	I1009 20:03:36.879131  457717 cri.go:89] found id: "a3e1d7ac8b25781dbad544fea22784db5fdb0f4de80670ff5f131dc3cc536739"
	I1009 20:03:36.879136  457717 cri.go:89] found id: "8b7b5b8265013e32789ed2351787ae158830229a757a1bc103a4456924b76035"
	I1009 20:03:36.879140  457717 cri.go:89] found id: "8ab6890c2164d0b6bbc82e2679dbd67b5dfe706686726cd94224aaf22c16f80f"
	I1009 20:03:36.879144  457717 cri.go:89] found id: "bb3480137661617835e8f2461eda88ed8e0afcd207648cb4d703a117457533cf"
	I1009 20:03:36.879158  457717 cri.go:89] found id: "5b2ba970850f91ec7dc47036664e17c95915f9f4e974dfe18f12c57f19dc05a3"
	I1009 20:03:36.879166  457717 cri.go:89] found id: "715315fe8199656e0b35e6405a491d6927104742238ac1c9811ad467110e9936"
	I1009 20:03:36.879169  457717 cri.go:89] found id: "f8690925bda20a05089b5b66d446d2a265402cbc16285b44139837240ca69a30"
	I1009 20:03:36.879173  457717 cri.go:89] found id: "5f2a4c1ed909bfd58b69d5787042aa91a6c0d43e3eef176ba2274c648fad521a"
	I1009 20:03:36.879178  457717 cri.go:89] found id: "a58cd421c4789d3b1e15645239af493035656079a6c5a7405c605212d4f12db9"
	I1009 20:03:36.879181  457717 cri.go:89] found id: ""
	I1009 20:03:36.879235  457717 ssh_runner.go:195] Run: sudo runc list -f json
	I1009 20:03:36.893999  457717 out.go:203] 
	W1009 20:03:36.896924  457717 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-09T20:03:36Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-09T20:03:36Z" level=error msg="open /run/runc: no such file or directory"
	
	W1009 20:03:36.897008  457717 out.go:285] * 
	* 
	W1009 20:03:36.902707  457717 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1009 20:03:36.905699  457717 out.go:203] 

                                                
                                                
** /stderr **
pause_test.go:112: failed to pause minikube with args: "out/minikube-linux-arm64 pause -p pause-383163 --alsologtostderr -v=5" : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestPause/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestPause/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect pause-383163
helpers_test.go:243: (dbg) docker inspect pause-383163:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "5b864bacdbf740035af5526f14a34657026d770c3700f8ba7f8bb641f5902864",
	        "Created": "2025-10-09T20:01:53.600639875Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 451656,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-09T20:01:53.679652006Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:0e619a02aa1f399c748a05785ca381533d7a01df724b62f3a9ddd9e288db8f6f",
	        "ResolvConfPath": "/var/lib/docker/containers/5b864bacdbf740035af5526f14a34657026d770c3700f8ba7f8bb641f5902864/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/5b864bacdbf740035af5526f14a34657026d770c3700f8ba7f8bb641f5902864/hostname",
	        "HostsPath": "/var/lib/docker/containers/5b864bacdbf740035af5526f14a34657026d770c3700f8ba7f8bb641f5902864/hosts",
	        "LogPath": "/var/lib/docker/containers/5b864bacdbf740035af5526f14a34657026d770c3700f8ba7f8bb641f5902864/5b864bacdbf740035af5526f14a34657026d770c3700f8ba7f8bb641f5902864-json.log",
	        "Name": "/pause-383163",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "pause-383163:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "pause-383163",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "5b864bacdbf740035af5526f14a34657026d770c3700f8ba7f8bb641f5902864",
	                "LowerDir": "/var/lib/docker/overlay2/26d4f9b68a6ffc726dd4e3fe961e65b60a4439463009444074acbaa166b8fae8-init/diff:/var/lib/docker/overlay2/810a91395ed9b7ed2c0bbbdee8600efcf64f88722cbabc47d471235a9f901ed9/diff",
	                "MergedDir": "/var/lib/docker/overlay2/26d4f9b68a6ffc726dd4e3fe961e65b60a4439463009444074acbaa166b8fae8/merged",
	                "UpperDir": "/var/lib/docker/overlay2/26d4f9b68a6ffc726dd4e3fe961e65b60a4439463009444074acbaa166b8fae8/diff",
	                "WorkDir": "/var/lib/docker/overlay2/26d4f9b68a6ffc726dd4e3fe961e65b60a4439463009444074acbaa166b8fae8/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "pause-383163",
	                "Source": "/var/lib/docker/volumes/pause-383163/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "pause-383163",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "pause-383163",
	                "name.minikube.sigs.k8s.io": "pause-383163",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "eb5ce84d03467f28a9daf68107a371dc28f5dc96c9a5c184625f5a0d3eac44e8",
	            "SandboxKey": "/var/run/docker/netns/eb5ce84d0346",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33391"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33392"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33395"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33393"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33394"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "pause-383163": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "fa:88:3b:e9:01:7a",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "ed7ad6a5b4e1cfaa878c0b11825ac42d5f0a339e60392d3e4a9c05ec240e619a",
	                    "EndpointID": "f4f17eaa70d22135d3163a12c969128ec19389180d914fdac7f64a3ca204f037",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "pause-383163",
	                        "5b864bacdbf7"
	                    ]
	                }
	            }
	        }
	    }
	]
-- /stdout --
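The inspect output above confirms the container was left running and unpaused after the failed pause command. A minimal sketch of reading the same fields programmatically, assuming only the JSON keys visible in the dump above (the struct and the hard-coded profile name are illustrative, not part of the test harness):

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// containerInfo mirrors only the fields used below; the JSON keys match
// the `docker inspect` dump shown above.
type containerInfo struct {
	Name  string `json:"Name"`
	State struct {
		Status  string `json:"Status"`
		Running bool   `json:"Running"`
		Paused  bool   `json:"Paused"`
	} `json:"State"`
}

func main() {
	// Same probe as the post-mortem step: docker inspect pause-383163
	out, err := exec.Command("docker", "inspect", "pause-383163").Output()
	if err != nil {
		fmt.Println("docker inspect failed:", err)
		return
	}
	var infos []containerInfo
	if err := json.Unmarshal(out, &infos); err != nil {
		fmt.Println("decoding inspect output failed:", err)
		return
	}
	for _, c := range infos {
		// After the failed pause above this reports running=true paused=false.
		fmt.Printf("%s status=%s running=%v paused=%v\n",
			c.Name, c.State.Status, c.State.Running, c.State.Paused)
	}
}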
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p pause-383163 -n pause-383163
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p pause-383163 -n pause-383163: exit status 2 (464.597396ms)
-- stdout --
	Running
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
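The non-zero exit from the status probe is tolerated here ("may be ok"); the harness records it and moves on to collecting logs. A minimal sketch of capturing that exit code without aborting, assuming the same binary path and profile name as in the command above:

package main

import (
	"fmt"
	"log"
	"os/exec"
	"strings"
)

func main() {
	// Same probe as above; a non-zero exit is recorded, not treated as fatal.
	cmd := exec.Command("out/minikube-linux-arm64", "status",
		"--format={{.Host}}", "-p", "pause-383163", "-n", "pause-383163")
	out, err := cmd.Output()
	code := 0
	if exitErr, ok := err.(*exec.ExitError); ok {
		code = exitErr.ExitCode() // in the run above: Host reports Running, exit status 2
	} else if err != nil {
		log.Fatal(err) // failing to launch the binary at all is a real error
	}
	fmt.Printf("host: %s (exit status %d)\n", strings.TrimSpace(string(out)), code)
}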
helpers_test.go:252: <<< TestPause/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestPause/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p pause-383163 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p pause-383163 logs -n 25: (1.499859508s)
helpers_test.go:260: TestPause/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                   ARGS                                                                   │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -p NoKubernetes-965213 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                                    │ NoKubernetes-965213       │ jenkins │ v1.37.0 │ 09 Oct 25 19:57 UTC │ 09 Oct 25 19:57 UTC │
	│ start   │ -p missing-upgrade-917803 --memory=3072 --driver=docker  --container-runtime=crio                                                        │ missing-upgrade-917803    │ jenkins │ v1.32.0 │ 09 Oct 25 19:57 UTC │ 09 Oct 25 19:58 UTC │
	│ start   │ -p NoKubernetes-965213 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                    │ NoKubernetes-965213       │ jenkins │ v1.37.0 │ 09 Oct 25 19:57 UTC │ 09 Oct 25 19:58 UTC │
	│ delete  │ -p NoKubernetes-965213                                                                                                                   │ NoKubernetes-965213       │ jenkins │ v1.37.0 │ 09 Oct 25 19:58 UTC │ 09 Oct 25 19:58 UTC │
	│ start   │ -p NoKubernetes-965213 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                    │ NoKubernetes-965213       │ jenkins │ v1.37.0 │ 09 Oct 25 19:58 UTC │ 09 Oct 25 19:58 UTC │
	│ ssh     │ -p NoKubernetes-965213 sudo systemctl is-active --quiet service kubelet                                                                  │ NoKubernetes-965213       │ jenkins │ v1.37.0 │ 09 Oct 25 19:58 UTC │                     │
	│ stop    │ -p NoKubernetes-965213                                                                                                                   │ NoKubernetes-965213       │ jenkins │ v1.37.0 │ 09 Oct 25 19:58 UTC │ 09 Oct 25 19:58 UTC │
	│ start   │ -p NoKubernetes-965213 --driver=docker  --container-runtime=crio                                                                         │ NoKubernetes-965213       │ jenkins │ v1.37.0 │ 09 Oct 25 19:58 UTC │ 09 Oct 25 19:58 UTC │
	│ ssh     │ -p NoKubernetes-965213 sudo systemctl is-active --quiet service kubelet                                                                  │ NoKubernetes-965213       │ jenkins │ v1.37.0 │ 09 Oct 25 19:58 UTC │                     │
	│ delete  │ -p NoKubernetes-965213                                                                                                                   │ NoKubernetes-965213       │ jenkins │ v1.37.0 │ 09 Oct 25 19:58 UTC │ 09 Oct 25 19:58 UTC │
	│ start   │ -p kubernetes-upgrade-164946 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio │ kubernetes-upgrade-164946 │ jenkins │ v1.37.0 │ 09 Oct 25 19:58 UTC │ 09 Oct 25 19:59 UTC │
	│ start   │ -p missing-upgrade-917803 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                 │ missing-upgrade-917803    │ jenkins │ v1.37.0 │ 09 Oct 25 19:58 UTC │ 09 Oct 25 19:59 UTC │
	│ stop    │ -p kubernetes-upgrade-164946                                                                                                             │ kubernetes-upgrade-164946 │ jenkins │ v1.37.0 │ 09 Oct 25 19:59 UTC │ 09 Oct 25 19:59 UTC │
	│ start   │ -p kubernetes-upgrade-164946 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio │ kubernetes-upgrade-164946 │ jenkins │ v1.37.0 │ 09 Oct 25 19:59 UTC │                     │
	│ delete  │ -p missing-upgrade-917803                                                                                                                │ missing-upgrade-917803    │ jenkins │ v1.37.0 │ 09 Oct 25 19:59 UTC │ 09 Oct 25 19:59 UTC │
	│ start   │ -p stopped-upgrade-265052 --memory=3072 --vm-driver=docker  --container-runtime=crio                                                     │ stopped-upgrade-265052    │ jenkins │ v1.32.0 │ 09 Oct 25 19:59 UTC │ 09 Oct 25 20:00 UTC │
	│ stop    │ stopped-upgrade-265052 stop                                                                                                              │ stopped-upgrade-265052    │ jenkins │ v1.32.0 │ 09 Oct 25 20:00 UTC │ 09 Oct 25 20:00 UTC │
	│ start   │ -p stopped-upgrade-265052 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                 │ stopped-upgrade-265052    │ jenkins │ v1.37.0 │ 09 Oct 25 20:00 UTC │ 09 Oct 25 20:00 UTC │
	│ delete  │ -p stopped-upgrade-265052                                                                                                                │ stopped-upgrade-265052    │ jenkins │ v1.37.0 │ 09 Oct 25 20:00 UTC │ 09 Oct 25 20:00 UTC │
	│ start   │ -p running-upgrade-055303 --memory=3072 --vm-driver=docker  --container-runtime=crio                                                     │ running-upgrade-055303    │ jenkins │ v1.32.0 │ 09 Oct 25 20:00 UTC │ 09 Oct 25 20:01 UTC │
	│ start   │ -p running-upgrade-055303 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                 │ running-upgrade-055303    │ jenkins │ v1.37.0 │ 09 Oct 25 20:01 UTC │ 09 Oct 25 20:01 UTC │
	│ delete  │ -p running-upgrade-055303                                                                                                                │ running-upgrade-055303    │ jenkins │ v1.37.0 │ 09 Oct 25 20:01 UTC │ 09 Oct 25 20:01 UTC │
	│ start   │ -p pause-383163 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio                                │ pause-383163              │ jenkins │ v1.37.0 │ 09 Oct 25 20:01 UTC │ 09 Oct 25 20:03 UTC │
	│ start   │ -p pause-383163 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                         │ pause-383163              │ jenkins │ v1.37.0 │ 09 Oct 25 20:03 UTC │ 09 Oct 25 20:03 UTC │
	│ pause   │ -p pause-383163 --alsologtostderr -v=5                                                                                                   │ pause-383163              │ jenkins │ v1.37.0 │ 09 Oct 25 20:03 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/09 20:03:08
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1009 20:03:08.245201  455689 out.go:360] Setting OutFile to fd 1 ...
	I1009 20:03:08.245424  455689 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1009 20:03:08.245456  455689 out.go:374] Setting ErrFile to fd 2...
	I1009 20:03:08.245477  455689 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1009 20:03:08.245751  455689 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21683-294150/.minikube/bin
	I1009 20:03:08.246163  455689 out.go:368] Setting JSON to false
	I1009 20:03:08.247158  455689 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":9928,"bootTime":1760030261,"procs":201,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1009 20:03:08.247258  455689 start.go:143] virtualization:  
	I1009 20:03:08.250788  455689 out.go:179] * [pause-383163] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1009 20:03:08.254828  455689 notify.go:221] Checking for updates...
	I1009 20:03:08.258068  455689 out.go:179]   - MINIKUBE_LOCATION=21683
	I1009 20:03:08.261245  455689 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1009 20:03:08.264164  455689 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21683-294150/kubeconfig
	I1009 20:03:08.267175  455689 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21683-294150/.minikube
	I1009 20:03:08.270205  455689 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1009 20:03:08.273231  455689 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1009 20:03:08.276766  455689 config.go:182] Loaded profile config "pause-383163": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1009 20:03:08.277381  455689 driver.go:422] Setting default libvirt URI to qemu:///system
	I1009 20:03:08.312157  455689 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1009 20:03:08.312344  455689 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1009 20:03:08.372319  455689 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:51 OomKillDisable:true NGoroutines:62 SystemTime:2025-10-09 20:03:08.36245686 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aa
rch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214827008 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path
:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1009 20:03:08.372439  455689 docker.go:319] overlay module found
	I1009 20:03:08.377616  455689 out.go:179] * Using the docker driver based on existing profile
	I1009 20:03:08.380300  455689 start.go:309] selected driver: docker
	I1009 20:03:08.380325  455689 start.go:930] validating driver "docker" against &{Name:pause-383163 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:pause-383163 Namespace:default APIServerHAVIP: APIServerName:minikub
eCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false regi
stry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1009 20:03:08.380455  455689 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1009 20:03:08.380562  455689 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1009 20:03:08.453067  455689 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:51 OomKillDisable:true NGoroutines:62 SystemTime:2025-10-09 20:03:08.44360941 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aa
rch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214827008 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path
:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1009 20:03:08.453556  455689 cni.go:84] Creating CNI manager for ""
	I1009 20:03:08.453627  455689 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1009 20:03:08.453679  455689 start.go:353] cluster config:
	{Name:pause-383163 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:pause-383163 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:c
rio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false
storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1009 20:03:08.456735  455689 out.go:179] * Starting "pause-383163" primary control-plane node in "pause-383163" cluster
	I1009 20:03:08.459515  455689 cache.go:123] Beginning downloading kic base image for docker with crio
	I1009 20:03:08.462464  455689 out.go:179] * Pulling base image v0.0.48-1759745255-21703 ...
	I1009 20:03:08.465443  455689 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1009 20:03:08.465506  455689 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21683-294150/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1009 20:03:08.465520  455689 cache.go:58] Caching tarball of preloaded images
	I1009 20:03:08.465618  455689 preload.go:233] Found /home/jenkins/minikube-integration/21683-294150/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1009 20:03:08.465636  455689 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1009 20:03:08.465790  455689 profile.go:143] Saving config to /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/pause-383163/config.json ...
	I1009 20:03:08.466038  455689 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon
	I1009 20:03:08.491007  455689 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon, skipping pull
	I1009 20:03:08.491034  455689 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 exists in daemon, skipping load
	I1009 20:03:08.491049  455689 cache.go:232] Successfully downloaded all kic artifacts
	I1009 20:03:08.491073  455689 start.go:361] acquireMachinesLock for pause-383163: {Name:mk41ce8a74c4d0ecbb9030f4498a10ad28cda730 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1009 20:03:08.491134  455689 start.go:365] duration metric: took 39.762µs to acquireMachinesLock for "pause-383163"
	I1009 20:03:08.491158  455689 start.go:97] Skipping create...Using existing machine configuration
	I1009 20:03:08.491164  455689 fix.go:55] fixHost starting: 
	I1009 20:03:08.491440  455689 cli_runner.go:164] Run: docker container inspect pause-383163 --format={{.State.Status}}
	I1009 20:03:08.508803  455689 fix.go:113] recreateIfNeeded on pause-383163: state=Running err=<nil>
	W1009 20:03:08.508843  455689 fix.go:139] unexpected machine state, will restart: <nil>
	I1009 20:03:07.259102  439734 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1009 20:03:07.259543  439734 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1009 20:03:07.259592  439734 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 20:03:07.259653  439734 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 20:03:07.305811  439734 cri.go:89] found id: "d77ac3de8a7a39d648d1517d5fa734b7495e3f909d23628a34c76de3d5ef57f6"
	I1009 20:03:07.305832  439734 cri.go:89] found id: ""
	I1009 20:03:07.305840  439734 logs.go:282] 1 containers: [d77ac3de8a7a39d648d1517d5fa734b7495e3f909d23628a34c76de3d5ef57f6]
	I1009 20:03:07.305900  439734 ssh_runner.go:195] Run: which crictl
	I1009 20:03:07.311192  439734 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 20:03:07.311276  439734 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 20:03:07.340385  439734 cri.go:89] found id: ""
	I1009 20:03:07.340409  439734 logs.go:282] 0 containers: []
	W1009 20:03:07.340419  439734 logs.go:284] No container was found matching "etcd"
	I1009 20:03:07.340426  439734 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 20:03:07.340487  439734 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 20:03:07.370982  439734 cri.go:89] found id: ""
	I1009 20:03:07.371057  439734 logs.go:282] 0 containers: []
	W1009 20:03:07.371083  439734 logs.go:284] No container was found matching "coredns"
	I1009 20:03:07.371099  439734 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 20:03:07.371178  439734 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 20:03:07.405196  439734 cri.go:89] found id: "860dfe4c78e184a9fee28ff64c7bf737667b86888d84a7c16dc138690d6b945c"
	I1009 20:03:07.405222  439734 cri.go:89] found id: ""
	I1009 20:03:07.405231  439734 logs.go:282] 1 containers: [860dfe4c78e184a9fee28ff64c7bf737667b86888d84a7c16dc138690d6b945c]
	I1009 20:03:07.405323  439734 ssh_runner.go:195] Run: which crictl
	I1009 20:03:07.409511  439734 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 20:03:07.409630  439734 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 20:03:07.436489  439734 cri.go:89] found id: ""
	I1009 20:03:07.436516  439734 logs.go:282] 0 containers: []
	W1009 20:03:07.436525  439734 logs.go:284] No container was found matching "kube-proxy"
	I1009 20:03:07.436533  439734 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 20:03:07.436594  439734 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 20:03:07.464694  439734 cri.go:89] found id: "56dd658f7c0543e07581d314639507f53947ef9a0096c18a5f0d9d2205eee024"
	I1009 20:03:07.464716  439734 cri.go:89] found id: "7fd5bbac345d9ddc389dd6302ea7d94b7b1cdec1b99f5b507688eff890a69a9f"
	I1009 20:03:07.464733  439734 cri.go:89] found id: ""
	I1009 20:03:07.464742  439734 logs.go:282] 2 containers: [56dd658f7c0543e07581d314639507f53947ef9a0096c18a5f0d9d2205eee024 7fd5bbac345d9ddc389dd6302ea7d94b7b1cdec1b99f5b507688eff890a69a9f]
	I1009 20:03:07.464812  439734 ssh_runner.go:195] Run: which crictl
	I1009 20:03:07.468551  439734 ssh_runner.go:195] Run: which crictl
	I1009 20:03:07.472315  439734 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 20:03:07.472439  439734 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 20:03:07.509615  439734 cri.go:89] found id: ""
	I1009 20:03:07.509694  439734 logs.go:282] 0 containers: []
	W1009 20:03:07.509718  439734 logs.go:284] No container was found matching "kindnet"
	I1009 20:03:07.509756  439734 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1009 20:03:07.509861  439734 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1009 20:03:07.545695  439734 cri.go:89] found id: ""
	I1009 20:03:07.545717  439734 logs.go:282] 0 containers: []
	W1009 20:03:07.545726  439734 logs.go:284] No container was found matching "storage-provisioner"
	I1009 20:03:07.545750  439734 logs.go:123] Gathering logs for describe nodes ...
	I1009 20:03:07.545770  439734 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 20:03:07.632406  439734 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 20:03:07.632437  439734 logs.go:123] Gathering logs for kube-apiserver [d77ac3de8a7a39d648d1517d5fa734b7495e3f909d23628a34c76de3d5ef57f6] ...
	I1009 20:03:07.632450  439734 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d77ac3de8a7a39d648d1517d5fa734b7495e3f909d23628a34c76de3d5ef57f6"
	I1009 20:03:07.667939  439734 logs.go:123] Gathering logs for kube-scheduler [860dfe4c78e184a9fee28ff64c7bf737667b86888d84a7c16dc138690d6b945c] ...
	I1009 20:03:07.667970  439734 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 860dfe4c78e184a9fee28ff64c7bf737667b86888d84a7c16dc138690d6b945c"
	I1009 20:03:07.736362  439734 logs.go:123] Gathering logs for kube-controller-manager [7fd5bbac345d9ddc389dd6302ea7d94b7b1cdec1b99f5b507688eff890a69a9f] ...
	I1009 20:03:07.736402  439734 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7fd5bbac345d9ddc389dd6302ea7d94b7b1cdec1b99f5b507688eff890a69a9f"
	I1009 20:03:07.766150  439734 logs.go:123] Gathering logs for container status ...
	I1009 20:03:07.766182  439734 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 20:03:07.802950  439734 logs.go:123] Gathering logs for kubelet ...
	I1009 20:03:07.802982  439734 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 20:03:07.919126  439734 logs.go:123] Gathering logs for dmesg ...
	I1009 20:03:07.919170  439734 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 20:03:07.936344  439734 logs.go:123] Gathering logs for kube-controller-manager [56dd658f7c0543e07581d314639507f53947ef9a0096c18a5f0d9d2205eee024] ...
	I1009 20:03:07.936375  439734 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 56dd658f7c0543e07581d314639507f53947ef9a0096c18a5f0d9d2205eee024"
	I1009 20:03:07.967682  439734 logs.go:123] Gathering logs for CRI-O ...
	I1009 20:03:07.967708  439734 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 20:03:08.512042  455689 out.go:252] * Updating the running docker "pause-383163" container ...
	I1009 20:03:08.512089  455689 machine.go:93] provisionDockerMachine start ...
	I1009 20:03:08.512192  455689 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-383163
	I1009 20:03:08.536291  455689 main.go:141] libmachine: Using SSH client type: native
	I1009 20:03:08.536633  455689 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33391 <nil> <nil>}
	I1009 20:03:08.536648  455689 main.go:141] libmachine: About to run SSH command:
	hostname
	I1009 20:03:08.689064  455689 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-383163
	
	I1009 20:03:08.689097  455689 ubuntu.go:182] provisioning hostname "pause-383163"
	I1009 20:03:08.689189  455689 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-383163
	I1009 20:03:08.706929  455689 main.go:141] libmachine: Using SSH client type: native
	I1009 20:03:08.707450  455689 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33391 <nil> <nil>}
	I1009 20:03:08.707470  455689 main.go:141] libmachine: About to run SSH command:
	sudo hostname pause-383163 && echo "pause-383163" | sudo tee /etc/hostname
	I1009 20:03:08.870436  455689 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-383163
	
	I1009 20:03:08.870514  455689 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-383163
	I1009 20:03:08.889433  455689 main.go:141] libmachine: Using SSH client type: native
	I1009 20:03:08.889754  455689 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33391 <nil> <nil>}
	I1009 20:03:08.889777  455689 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\spause-383163' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-383163/g' /etc/hosts;
				else 
					echo '127.0.1.1 pause-383163' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1009 20:03:09.041848  455689 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1009 20:03:09.041878  455689 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21683-294150/.minikube CaCertPath:/home/jenkins/minikube-integration/21683-294150/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21683-294150/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21683-294150/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21683-294150/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21683-294150/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21683-294150/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21683-294150/.minikube}
	I1009 20:03:09.041913  455689 ubuntu.go:190] setting up certificates
	I1009 20:03:09.041923  455689 provision.go:84] configureAuth start
	I1009 20:03:09.041987  455689 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" pause-383163
	I1009 20:03:09.061029  455689 provision.go:143] copyHostCerts
	I1009 20:03:09.061265  455689 exec_runner.go:144] found /home/jenkins/minikube-integration/21683-294150/.minikube/ca.pem, removing ...
	I1009 20:03:09.061283  455689 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21683-294150/.minikube/ca.pem
	I1009 20:03:09.061367  455689 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-294150/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21683-294150/.minikube/ca.pem (1078 bytes)
	I1009 20:03:09.061476  455689 exec_runner.go:144] found /home/jenkins/minikube-integration/21683-294150/.minikube/cert.pem, removing ...
	I1009 20:03:09.061489  455689 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21683-294150/.minikube/cert.pem
	I1009 20:03:09.061520  455689 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-294150/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21683-294150/.minikube/cert.pem (1123 bytes)
	I1009 20:03:09.061572  455689 exec_runner.go:144] found /home/jenkins/minikube-integration/21683-294150/.minikube/key.pem, removing ...
	I1009 20:03:09.061583  455689 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21683-294150/.minikube/key.pem
	I1009 20:03:09.061611  455689 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-294150/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21683-294150/.minikube/key.pem (1679 bytes)
	I1009 20:03:09.061664  455689 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21683-294150/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21683-294150/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21683-294150/.minikube/certs/ca-key.pem org=jenkins.pause-383163 san=[127.0.0.1 192.168.85.2 localhost minikube pause-383163]
	I1009 20:03:09.935554  455689 provision.go:177] copyRemoteCerts
	I1009 20:03:09.935633  455689 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1009 20:03:09.935674  455689 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-383163
	I1009 20:03:09.953542  455689 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33391 SSHKeyPath:/home/jenkins/minikube-integration/21683-294150/.minikube/machines/pause-383163/id_rsa Username:docker}
	I1009 20:03:10.065752  455689 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-294150/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1009 20:03:10.086788  455689 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-294150/.minikube/machines/server.pem --> /etc/docker/server.pem (1204 bytes)
	I1009 20:03:10.107309  455689 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-294150/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1009 20:03:10.127136  455689 provision.go:87] duration metric: took 1.085197762s to configureAuth
	I1009 20:03:10.127162  455689 ubuntu.go:206] setting minikube options for container-runtime
	I1009 20:03:10.127390  455689 config.go:182] Loaded profile config "pause-383163": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1009 20:03:10.127543  455689 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-383163
	I1009 20:03:10.145742  455689 main.go:141] libmachine: Using SSH client type: native
	I1009 20:03:10.146041  455689 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33391 <nil> <nil>}
	I1009 20:03:10.146060  455689 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1009 20:03:10.528528  439734 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1009 20:03:10.528986  439734 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1009 20:03:10.529041  439734 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 20:03:10.529132  439734 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 20:03:10.555687  439734 cri.go:89] found id: "d77ac3de8a7a39d648d1517d5fa734b7495e3f909d23628a34c76de3d5ef57f6"
	I1009 20:03:10.555711  439734 cri.go:89] found id: ""
	I1009 20:03:10.555720  439734 logs.go:282] 1 containers: [d77ac3de8a7a39d648d1517d5fa734b7495e3f909d23628a34c76de3d5ef57f6]
	I1009 20:03:10.555781  439734 ssh_runner.go:195] Run: which crictl
	I1009 20:03:10.559489  439734 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 20:03:10.559573  439734 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 20:03:10.585018  439734 cri.go:89] found id: ""
	I1009 20:03:10.585040  439734 logs.go:282] 0 containers: []
	W1009 20:03:10.585048  439734 logs.go:284] No container was found matching "etcd"
	I1009 20:03:10.585055  439734 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 20:03:10.585171  439734 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 20:03:10.616457  439734 cri.go:89] found id: ""
	I1009 20:03:10.616483  439734 logs.go:282] 0 containers: []
	W1009 20:03:10.616492  439734 logs.go:284] No container was found matching "coredns"
	I1009 20:03:10.616499  439734 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 20:03:10.616562  439734 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 20:03:10.645807  439734 cri.go:89] found id: "860dfe4c78e184a9fee28ff64c7bf737667b86888d84a7c16dc138690d6b945c"
	I1009 20:03:10.645833  439734 cri.go:89] found id: ""
	I1009 20:03:10.645842  439734 logs.go:282] 1 containers: [860dfe4c78e184a9fee28ff64c7bf737667b86888d84a7c16dc138690d6b945c]
	I1009 20:03:10.645902  439734 ssh_runner.go:195] Run: which crictl
	I1009 20:03:10.649727  439734 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 20:03:10.649798  439734 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 20:03:10.680924  439734 cri.go:89] found id: ""
	I1009 20:03:10.680947  439734 logs.go:282] 0 containers: []
	W1009 20:03:10.680956  439734 logs.go:284] No container was found matching "kube-proxy"
	I1009 20:03:10.680963  439734 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 20:03:10.681022  439734 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 20:03:10.713037  439734 cri.go:89] found id: "56dd658f7c0543e07581d314639507f53947ef9a0096c18a5f0d9d2205eee024"
	I1009 20:03:10.713062  439734 cri.go:89] found id: "7fd5bbac345d9ddc389dd6302ea7d94b7b1cdec1b99f5b507688eff890a69a9f"
	I1009 20:03:10.713067  439734 cri.go:89] found id: ""
	I1009 20:03:10.713075  439734 logs.go:282] 2 containers: [56dd658f7c0543e07581d314639507f53947ef9a0096c18a5f0d9d2205eee024 7fd5bbac345d9ddc389dd6302ea7d94b7b1cdec1b99f5b507688eff890a69a9f]
	I1009 20:03:10.713169  439734 ssh_runner.go:195] Run: which crictl
	I1009 20:03:10.717175  439734 ssh_runner.go:195] Run: which crictl
	I1009 20:03:10.720768  439734 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 20:03:10.720844  439734 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 20:03:10.747632  439734 cri.go:89] found id: ""
	I1009 20:03:10.747656  439734 logs.go:282] 0 containers: []
	W1009 20:03:10.747665  439734 logs.go:284] No container was found matching "kindnet"
	I1009 20:03:10.747672  439734 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1009 20:03:10.747734  439734 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1009 20:03:10.777513  439734 cri.go:89] found id: ""
	I1009 20:03:10.777579  439734 logs.go:282] 0 containers: []
	W1009 20:03:10.777603  439734 logs.go:284] No container was found matching "storage-provisioner"
	I1009 20:03:10.777638  439734 logs.go:123] Gathering logs for kubelet ...
	I1009 20:03:10.777673  439734 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 20:03:10.893125  439734 logs.go:123] Gathering logs for kube-apiserver [d77ac3de8a7a39d648d1517d5fa734b7495e3f909d23628a34c76de3d5ef57f6] ...
	I1009 20:03:10.893208  439734 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d77ac3de8a7a39d648d1517d5fa734b7495e3f909d23628a34c76de3d5ef57f6"
	I1009 20:03:10.927850  439734 logs.go:123] Gathering logs for kube-controller-manager [56dd658f7c0543e07581d314639507f53947ef9a0096c18a5f0d9d2205eee024] ...
	I1009 20:03:10.927928  439734 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 56dd658f7c0543e07581d314639507f53947ef9a0096c18a5f0d9d2205eee024"
	I1009 20:03:10.959523  439734 logs.go:123] Gathering logs for kube-controller-manager [7fd5bbac345d9ddc389dd6302ea7d94b7b1cdec1b99f5b507688eff890a69a9f] ...
	I1009 20:03:10.959552  439734 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7fd5bbac345d9ddc389dd6302ea7d94b7b1cdec1b99f5b507688eff890a69a9f"
	I1009 20:03:10.988034  439734 logs.go:123] Gathering logs for container status ...
	I1009 20:03:10.988063  439734 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 20:03:11.029070  439734 logs.go:123] Gathering logs for dmesg ...
	I1009 20:03:11.029095  439734 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 20:03:11.045831  439734 logs.go:123] Gathering logs for describe nodes ...
	I1009 20:03:11.045858  439734 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 20:03:11.124372  439734 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 20:03:11.124394  439734 logs.go:123] Gathering logs for kube-scheduler [860dfe4c78e184a9fee28ff64c7bf737667b86888d84a7c16dc138690d6b945c] ...
	I1009 20:03:11.124412  439734 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 860dfe4c78e184a9fee28ff64c7bf737667b86888d84a7c16dc138690d6b945c"
	I1009 20:03:11.194997  439734 logs.go:123] Gathering logs for CRI-O ...
	I1009 20:03:11.195031  439734 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 20:03:13.759016  439734 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1009 20:03:13.759468  439734 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1009 20:03:13.759544  439734 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 20:03:13.759622  439734 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 20:03:13.787152  439734 cri.go:89] found id: "d77ac3de8a7a39d648d1517d5fa734b7495e3f909d23628a34c76de3d5ef57f6"
	I1009 20:03:13.787171  439734 cri.go:89] found id: ""
	I1009 20:03:13.787180  439734 logs.go:282] 1 containers: [d77ac3de8a7a39d648d1517d5fa734b7495e3f909d23628a34c76de3d5ef57f6]
	I1009 20:03:13.787238  439734 ssh_runner.go:195] Run: which crictl
	I1009 20:03:13.791201  439734 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 20:03:13.791283  439734 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 20:03:13.817976  439734 cri.go:89] found id: ""
	I1009 20:03:13.817998  439734 logs.go:282] 0 containers: []
	W1009 20:03:13.818007  439734 logs.go:284] No container was found matching "etcd"
	I1009 20:03:13.818013  439734 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 20:03:13.818072  439734 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 20:03:13.848367  439734 cri.go:89] found id: ""
	I1009 20:03:13.848390  439734 logs.go:282] 0 containers: []
	W1009 20:03:13.848400  439734 logs.go:284] No container was found matching "coredns"
	I1009 20:03:13.848407  439734 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 20:03:13.848468  439734 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 20:03:13.875857  439734 cri.go:89] found id: "860dfe4c78e184a9fee28ff64c7bf737667b86888d84a7c16dc138690d6b945c"
	I1009 20:03:13.875877  439734 cri.go:89] found id: ""
	I1009 20:03:13.875885  439734 logs.go:282] 1 containers: [860dfe4c78e184a9fee28ff64c7bf737667b86888d84a7c16dc138690d6b945c]
	I1009 20:03:13.875943  439734 ssh_runner.go:195] Run: which crictl
	I1009 20:03:13.879668  439734 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 20:03:13.879749  439734 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 20:03:13.910409  439734 cri.go:89] found id: ""
	I1009 20:03:13.910438  439734 logs.go:282] 0 containers: []
	W1009 20:03:13.910456  439734 logs.go:284] No container was found matching "kube-proxy"
	I1009 20:03:13.910464  439734 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 20:03:13.910539  439734 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 20:03:13.937483  439734 cri.go:89] found id: "56dd658f7c0543e07581d314639507f53947ef9a0096c18a5f0d9d2205eee024"
	I1009 20:03:13.937511  439734 cri.go:89] found id: ""
	I1009 20:03:13.937519  439734 logs.go:282] 1 containers: [56dd658f7c0543e07581d314639507f53947ef9a0096c18a5f0d9d2205eee024]
	I1009 20:03:13.937582  439734 ssh_runner.go:195] Run: which crictl
	I1009 20:03:13.941357  439734 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 20:03:13.941439  439734 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 20:03:13.966729  439734 cri.go:89] found id: ""
	I1009 20:03:13.966798  439734 logs.go:282] 0 containers: []
	W1009 20:03:13.966814  439734 logs.go:284] No container was found matching "kindnet"
	I1009 20:03:13.966821  439734 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1009 20:03:13.966886  439734 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1009 20:03:13.994594  439734 cri.go:89] found id: ""
	I1009 20:03:13.994621  439734 logs.go:282] 0 containers: []
	W1009 20:03:13.994629  439734 logs.go:284] No container was found matching "storage-provisioner"
	I1009 20:03:13.994639  439734 logs.go:123] Gathering logs for kubelet ...
	I1009 20:03:13.994651  439734 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 20:03:14.113139  439734 logs.go:123] Gathering logs for dmesg ...
	I1009 20:03:14.113175  439734 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 20:03:14.131659  439734 logs.go:123] Gathering logs for describe nodes ...
	I1009 20:03:14.131751  439734 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 20:03:14.197906  439734 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 20:03:14.197934  439734 logs.go:123] Gathering logs for kube-apiserver [d77ac3de8a7a39d648d1517d5fa734b7495e3f909d23628a34c76de3d5ef57f6] ...
	I1009 20:03:14.197948  439734 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d77ac3de8a7a39d648d1517d5fa734b7495e3f909d23628a34c76de3d5ef57f6"
	I1009 20:03:14.230749  439734 logs.go:123] Gathering logs for kube-scheduler [860dfe4c78e184a9fee28ff64c7bf737667b86888d84a7c16dc138690d6b945c] ...
	I1009 20:03:14.230783  439734 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 860dfe4c78e184a9fee28ff64c7bf737667b86888d84a7c16dc138690d6b945c"
	I1009 20:03:14.294517  439734 logs.go:123] Gathering logs for kube-controller-manager [56dd658f7c0543e07581d314639507f53947ef9a0096c18a5f0d9d2205eee024] ...
	I1009 20:03:14.294552  439734 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 56dd658f7c0543e07581d314639507f53947ef9a0096c18a5f0d9d2205eee024"
	I1009 20:03:14.319997  439734 logs.go:123] Gathering logs for CRI-O ...
	I1009 20:03:14.320027  439734 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 20:03:14.380396  439734 logs.go:123] Gathering logs for container status ...
	I1009 20:03:14.380432  439734 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 20:03:15.509588  455689 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1009 20:03:15.509610  455689 machine.go:96] duration metric: took 6.997512054s to provisionDockerMachine
	I1009 20:03:15.509621  455689 start.go:294] postStartSetup for "pause-383163" (driver="docker")
	I1009 20:03:15.509632  455689 start.go:323] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1009 20:03:15.509698  455689 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1009 20:03:15.509768  455689 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-383163
	I1009 20:03:15.534469  455689 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33391 SSHKeyPath:/home/jenkins/minikube-integration/21683-294150/.minikube/machines/pause-383163/id_rsa Username:docker}
	I1009 20:03:15.641811  455689 ssh_runner.go:195] Run: cat /etc/os-release
	I1009 20:03:15.645640  455689 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1009 20:03:15.645670  455689 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1009 20:03:15.645683  455689 filesync.go:126] Scanning /home/jenkins/minikube-integration/21683-294150/.minikube/addons for local assets ...
	I1009 20:03:15.645741  455689 filesync.go:126] Scanning /home/jenkins/minikube-integration/21683-294150/.minikube/files for local assets ...
	I1009 20:03:15.645828  455689 filesync.go:149] local asset: /home/jenkins/minikube-integration/21683-294150/.minikube/files/etc/ssl/certs/2960022.pem -> 2960022.pem in /etc/ssl/certs
	I1009 20:03:15.645947  455689 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1009 20:03:15.653866  455689 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-294150/.minikube/files/etc/ssl/certs/2960022.pem --> /etc/ssl/certs/2960022.pem (1708 bytes)
	I1009 20:03:15.672631  455689 start.go:297] duration metric: took 162.99285ms for postStartSetup
	I1009 20:03:15.672712  455689 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1009 20:03:15.672751  455689 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-383163
	I1009 20:03:15.690391  455689 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33391 SSHKeyPath:/home/jenkins/minikube-integration/21683-294150/.minikube/machines/pause-383163/id_rsa Username:docker}
	I1009 20:03:15.790653  455689 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1009 20:03:15.795828  455689 fix.go:57] duration metric: took 7.304654598s for fixHost
	I1009 20:03:15.795856  455689 start.go:84] releasing machines lock for "pause-383163", held for 7.3047083s
	I1009 20:03:15.795951  455689 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" pause-383163
	I1009 20:03:15.812914  455689 ssh_runner.go:195] Run: cat /version.json
	I1009 20:03:15.812982  455689 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-383163
	I1009 20:03:15.813308  455689 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1009 20:03:15.813385  455689 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-383163
	I1009 20:03:15.832355  455689 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33391 SSHKeyPath:/home/jenkins/minikube-integration/21683-294150/.minikube/machines/pause-383163/id_rsa Username:docker}
	I1009 20:03:15.835282  455689 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33391 SSHKeyPath:/home/jenkins/minikube-integration/21683-294150/.minikube/machines/pause-383163/id_rsa Username:docker}
	I1009 20:03:16.028971  455689 ssh_runner.go:195] Run: systemctl --version
	I1009 20:03:16.039802  455689 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1009 20:03:16.080280  455689 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1009 20:03:16.084943  455689 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1009 20:03:16.085023  455689 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1009 20:03:16.094064  455689 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1009 20:03:16.094090  455689 start.go:496] detecting cgroup driver to use...
	I1009 20:03:16.094123  455689 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1009 20:03:16.094177  455689 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1009 20:03:16.110478  455689 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1009 20:03:16.124097  455689 docker.go:218] disabling cri-docker service (if available) ...
	I1009 20:03:16.124207  455689 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1009 20:03:16.140440  455689 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1009 20:03:16.154224  455689 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1009 20:03:16.300492  455689 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1009 20:03:16.443047  455689 docker.go:234] disabling docker service ...
	I1009 20:03:16.443119  455689 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1009 20:03:16.459272  455689 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1009 20:03:16.473010  455689 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1009 20:03:16.609747  455689 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1009 20:03:16.755384  455689 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1009 20:03:16.770269  455689 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1009 20:03:16.785660  455689 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1009 20:03:16.785780  455689 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 20:03:16.794833  455689 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1009 20:03:16.794906  455689 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 20:03:16.804066  455689 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 20:03:16.813911  455689 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 20:03:16.823825  455689 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1009 20:03:16.832886  455689 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 20:03:16.842559  455689 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 20:03:16.851193  455689 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 20:03:16.860209  455689 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1009 20:03:16.867891  455689 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1009 20:03:16.875257  455689 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 20:03:17.030293  455689 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1009 20:03:17.248673  455689 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1009 20:03:17.248779  455689 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1009 20:03:17.253142  455689 start.go:564] Will wait 60s for crictl version
	I1009 20:03:17.253230  455689 ssh_runner.go:195] Run: which crictl
	I1009 20:03:17.257094  455689 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1009 20:03:17.299453  455689 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1009 20:03:17.299580  455689 ssh_runner.go:195] Run: crio --version
	I1009 20:03:17.348193  455689 ssh_runner.go:195] Run: crio --version
	I1009 20:03:17.386285  455689 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1009 20:03:17.389356  455689 cli_runner.go:164] Run: docker network inspect pause-383163 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1009 20:03:17.420125  455689 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1009 20:03:17.424909  455689 kubeadm.go:883] updating cluster {Name:pause-383163 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:pause-383163 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerName
s:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false regist
ry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1009 20:03:17.425064  455689 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1009 20:03:17.425249  455689 ssh_runner.go:195] Run: sudo crictl images --output json
	I1009 20:03:17.467653  455689 crio.go:514] all images are preloaded for cri-o runtime.
	I1009 20:03:17.467680  455689 crio.go:433] Images already preloaded, skipping extraction
	I1009 20:03:17.467739  455689 ssh_runner.go:195] Run: sudo crictl images --output json
	I1009 20:03:17.502352  455689 crio.go:514] all images are preloaded for cri-o runtime.
	I1009 20:03:17.502379  455689 cache_images.go:85] Images are preloaded, skipping loading
	I1009 20:03:17.502390  455689 kubeadm.go:934] updating node { 192.168.85.2 8443 v1.34.1 crio true true} ...
	I1009 20:03:17.502505  455689 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=pause-383163 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:pause-383163 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1009 20:03:17.502597  455689 ssh_runner.go:195] Run: crio config
	I1009 20:03:17.583996  455689 cni.go:84] Creating CNI manager for ""
	I1009 20:03:17.584021  455689 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1009 20:03:17.584039  455689 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1009 20:03:17.584077  455689 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-383163 NodeName:pause-383163 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernete
s/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1009 20:03:17.584225  455689 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "pause-383163"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1009 20:03:17.584308  455689 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1009 20:03:17.593204  455689 binaries.go:44] Found k8s binaries, skipping transfer
	I1009 20:03:17.593272  455689 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1009 20:03:17.603503  455689 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (362 bytes)
	I1009 20:03:17.619729  455689 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1009 20:03:17.635589  455689 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2209 bytes)
	I1009 20:03:17.653546  455689 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1009 20:03:17.659536  455689 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 20:03:17.829050  455689 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1009 20:03:17.844490  455689 certs.go:69] Setting up /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/pause-383163 for IP: 192.168.85.2
	I1009 20:03:17.844514  455689 certs.go:195] generating shared ca certs ...
	I1009 20:03:17.844532  455689 certs.go:227] acquiring lock for ca certs: {Name:mk21fccf7de12baa51e226eec44546b42b579a70 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 20:03:17.844682  455689 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21683-294150/.minikube/ca.key
	I1009 20:03:17.844729  455689 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21683-294150/.minikube/proxy-client-ca.key
	I1009 20:03:17.844741  455689 certs.go:257] generating profile certs ...
	I1009 20:03:17.844826  455689 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/pause-383163/client.key
	I1009 20:03:17.844960  455689 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/pause-383163/apiserver.key.9a25b576
	I1009 20:03:17.845009  455689 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/pause-383163/proxy-client.key
	I1009 20:03:17.845216  455689 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-294150/.minikube/certs/296002.pem (1338 bytes)
	W1009 20:03:17.845253  455689 certs.go:480] ignoring /home/jenkins/minikube-integration/21683-294150/.minikube/certs/296002_empty.pem, impossibly tiny 0 bytes
	I1009 20:03:17.845262  455689 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-294150/.minikube/certs/ca-key.pem (1679 bytes)
	I1009 20:03:17.845294  455689 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-294150/.minikube/certs/ca.pem (1078 bytes)
	I1009 20:03:17.845327  455689 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-294150/.minikube/certs/cert.pem (1123 bytes)
	I1009 20:03:17.845350  455689 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-294150/.minikube/certs/key.pem (1679 bytes)
	I1009 20:03:17.845395  455689 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-294150/.minikube/files/etc/ssl/certs/2960022.pem (1708 bytes)
	I1009 20:03:17.846000  455689 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-294150/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1009 20:03:17.866561  455689 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-294150/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1009 20:03:17.886121  455689 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-294150/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1009 20:03:17.904350  455689 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-294150/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1009 20:03:17.923310  455689 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/pause-383163/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1009 20:03:17.942166  455689 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/pause-383163/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1009 20:03:17.961341  455689 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/pause-383163/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1009 20:03:17.980357  455689 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/pause-383163/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1009 20:03:18.014856  455689 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-294150/.minikube/certs/296002.pem --> /usr/share/ca-certificates/296002.pem (1338 bytes)
	I1009 20:03:18.035380  455689 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-294150/.minikube/files/etc/ssl/certs/2960022.pem --> /usr/share/ca-certificates/2960022.pem (1708 bytes)
	I1009 20:03:18.054991  455689 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-294150/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1009 20:03:18.074414  455689 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1009 20:03:18.088506  455689 ssh_runner.go:195] Run: openssl version
	I1009 20:03:18.095400  455689 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/296002.pem && ln -fs /usr/share/ca-certificates/296002.pem /etc/ssl/certs/296002.pem"
	I1009 20:03:18.104323  455689 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/296002.pem
	I1009 20:03:18.108401  455689 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  9 19:08 /usr/share/ca-certificates/296002.pem
	I1009 20:03:18.108474  455689 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/296002.pem
	I1009 20:03:18.149576  455689 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/296002.pem /etc/ssl/certs/51391683.0"
	I1009 20:03:18.157705  455689 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2960022.pem && ln -fs /usr/share/ca-certificates/2960022.pem /etc/ssl/certs/2960022.pem"
	I1009 20:03:18.166360  455689 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2960022.pem
	I1009 20:03:18.170445  455689 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  9 19:08 /usr/share/ca-certificates/2960022.pem
	I1009 20:03:18.170514  455689 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2960022.pem
	I1009 20:03:18.211684  455689 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2960022.pem /etc/ssl/certs/3ec20f2e.0"
	I1009 20:03:18.220071  455689 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1009 20:03:18.228634  455689 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1009 20:03:18.232358  455689 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  9 19:01 /usr/share/ca-certificates/minikubeCA.pem
	I1009 20:03:18.232454  455689 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1009 20:03:18.273443  455689 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1009 20:03:18.282129  455689 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1009 20:03:18.286218  455689 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1009 20:03:18.327950  455689 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1009 20:03:18.373923  455689 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1009 20:03:18.417931  455689 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1009 20:03:18.470240  455689 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1009 20:03:18.581945  455689 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1009 20:03:18.655086  455689 kubeadm.go:400] StartCluster: {Name:pause-383163 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:pause-383163 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[
] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-
aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1009 20:03:18.655254  455689 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1009 20:03:18.655350  455689 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1009 20:03:18.754537  455689 cri.go:89] found id: "a167ee63efd53a4275e3e7873bad1603ebe7e8a31f0dd3198756d4b1f148e52a"
	I1009 20:03:18.754609  455689 cri.go:89] found id: "75d7f10be8c3e4bdde6c5890b28343819891d67d2e73eb06f38c47013ae3a3cb"
	I1009 20:03:18.754630  455689 cri.go:89] found id: "4a07552f3446603a46059c12e8713e08b798083b8d17d79c386bb391fc8c893c"
	I1009 20:03:18.754651  455689 cri.go:89] found id: "b9eb2f7f088ee645099c5cd4b8e1f669da2435cd313728f2bfbfc759ff9937b6"
	I1009 20:03:18.754686  455689 cri.go:89] found id: "a3e1d7ac8b25781dbad544fea22784db5fdb0f4de80670ff5f131dc3cc536739"
	I1009 20:03:18.754712  455689 cri.go:89] found id: "8b7b5b8265013e32789ed2351787ae158830229a757a1bc103a4456924b76035"
	I1009 20:03:18.754731  455689 cri.go:89] found id: "8ab6890c2164d0b6bbc82e2679dbd67b5dfe706686726cd94224aaf22c16f80f"
	I1009 20:03:18.754766  455689 cri.go:89] found id: "bb3480137661617835e8f2461eda88ed8e0afcd207648cb4d703a117457533cf"
	I1009 20:03:18.754789  455689 cri.go:89] found id: "5b2ba970850f91ec7dc47036664e17c95915f9f4e974dfe18f12c57f19dc05a3"
	I1009 20:03:18.754814  455689 cri.go:89] found id: "715315fe8199656e0b35e6405a491d6927104742238ac1c9811ad467110e9936"
	I1009 20:03:18.754852  455689 cri.go:89] found id: "f8690925bda20a05089b5b66d446d2a265402cbc16285b44139837240ca69a30"
	I1009 20:03:18.754876  455689 cri.go:89] found id: "5f2a4c1ed909bfd58b69d5787042aa91a6c0d43e3eef176ba2274c648fad521a"
	I1009 20:03:18.754896  455689 cri.go:89] found id: "a58cd421c4789d3b1e15645239af493035656079a6c5a7405c605212d4f12db9"
	I1009 20:03:18.754933  455689 cri.go:89] found id: ""
	I1009 20:03:18.755024  455689 ssh_runner.go:195] Run: sudo runc list -f json
	W1009 20:03:18.781184  455689 kubeadm.go:407] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-09T20:03:18Z" level=error msg="open /run/runc: no such file or directory"
	I1009 20:03:18.781349  455689 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1009 20:03:18.801492  455689 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1009 20:03:18.801562  455689 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1009 20:03:18.801652  455689 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1009 20:03:18.823181  455689 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1009 20:03:18.823950  455689 kubeconfig.go:125] found "pause-383163" server: "https://192.168.85.2:8443"
	I1009 20:03:18.824970  455689 kapi.go:59] client config for pause-383163: &rest.Config{Host:"https://192.168.85.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21683-294150/.minikube/profiles/pause-383163/client.crt", KeyFile:"/home/jenkins/minikube-integration/21683-294150/.minikube/profiles/pause-383163/client.key", CAFile:"/home/jenkins/minikube-integration/21683-294150/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]s
tring(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2120180), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1009 20:03:18.825630  455689 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1009 20:03:18.825730  455689 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1009 20:03:18.825766  455689 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1009 20:03:18.825792  455689 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1009 20:03:18.825817  455689 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1009 20:03:18.826251  455689 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1009 20:03:18.837940  455689 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.85.2
	I1009 20:03:18.837975  455689 kubeadm.go:601] duration metric: took 36.395177ms to restartPrimaryControlPlane
	I1009 20:03:18.837985  455689 kubeadm.go:402] duration metric: took 182.909493ms to StartCluster
	I1009 20:03:18.838000  455689 settings.go:142] acquiring lock: {Name:mk20228ebaa2294ae35726600a0d8058088b24a4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 20:03:18.838076  455689 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21683-294150/kubeconfig
	I1009 20:03:18.838997  455689 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-294150/kubeconfig: {Name:mke4661344aa77e10bf6690825e8d3a29b29b411 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 20:03:18.839240  455689 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1009 20:03:18.839569  455689 config.go:182] Loaded profile config "pause-383163": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1009 20:03:18.839618  455689 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1009 20:03:18.843237  455689 out.go:179] * Verifying Kubernetes components...
	I1009 20:03:18.843313  455689 out.go:179] * Enabled addons: 
	I1009 20:03:16.918395  439734 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1009 20:03:16.918787  439734 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1009 20:03:16.918840  439734 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 20:03:16.918901  439734 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 20:03:16.958626  439734 cri.go:89] found id: "d77ac3de8a7a39d648d1517d5fa734b7495e3f909d23628a34c76de3d5ef57f6"
	I1009 20:03:16.958644  439734 cri.go:89] found id: ""
	I1009 20:03:16.958652  439734 logs.go:282] 1 containers: [d77ac3de8a7a39d648d1517d5fa734b7495e3f909d23628a34c76de3d5ef57f6]
	I1009 20:03:16.958712  439734 ssh_runner.go:195] Run: which crictl
	I1009 20:03:16.962685  439734 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 20:03:16.962753  439734 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 20:03:16.999982  439734 cri.go:89] found id: ""
	I1009 20:03:17.000009  439734 logs.go:282] 0 containers: []
	W1009 20:03:17.000019  439734 logs.go:284] No container was found matching "etcd"
	I1009 20:03:17.000027  439734 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 20:03:17.000100  439734 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 20:03:17.041618  439734 cri.go:89] found id: ""
	I1009 20:03:17.041641  439734 logs.go:282] 0 containers: []
	W1009 20:03:17.041736  439734 logs.go:284] No container was found matching "coredns"
	I1009 20:03:17.041744  439734 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 20:03:17.041804  439734 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 20:03:17.073489  439734 cri.go:89] found id: "860dfe4c78e184a9fee28ff64c7bf737667b86888d84a7c16dc138690d6b945c"
	I1009 20:03:17.073509  439734 cri.go:89] found id: ""
	I1009 20:03:17.073517  439734 logs.go:282] 1 containers: [860dfe4c78e184a9fee28ff64c7bf737667b86888d84a7c16dc138690d6b945c]
	I1009 20:03:17.073578  439734 ssh_runner.go:195] Run: which crictl
	I1009 20:03:17.077501  439734 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 20:03:17.077574  439734 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 20:03:17.120759  439734 cri.go:89] found id: ""
	I1009 20:03:17.120782  439734 logs.go:282] 0 containers: []
	W1009 20:03:17.120792  439734 logs.go:284] No container was found matching "kube-proxy"
	I1009 20:03:17.120799  439734 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 20:03:17.120897  439734 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 20:03:17.153701  439734 cri.go:89] found id: "56dd658f7c0543e07581d314639507f53947ef9a0096c18a5f0d9d2205eee024"
	I1009 20:03:17.153722  439734 cri.go:89] found id: ""
	I1009 20:03:17.153730  439734 logs.go:282] 1 containers: [56dd658f7c0543e07581d314639507f53947ef9a0096c18a5f0d9d2205eee024]
	I1009 20:03:17.153791  439734 ssh_runner.go:195] Run: which crictl
	I1009 20:03:17.158367  439734 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 20:03:17.158509  439734 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 20:03:17.195790  439734 cri.go:89] found id: ""
	I1009 20:03:17.195865  439734 logs.go:282] 0 containers: []
	W1009 20:03:17.195892  439734 logs.go:284] No container was found matching "kindnet"
	I1009 20:03:17.195919  439734 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1009 20:03:17.196001  439734 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1009 20:03:17.235498  439734 cri.go:89] found id: ""
	I1009 20:03:17.235578  439734 logs.go:282] 0 containers: []
	W1009 20:03:17.235601  439734 logs.go:284] No container was found matching "storage-provisioner"
	I1009 20:03:17.235630  439734 logs.go:123] Gathering logs for kube-apiserver [d77ac3de8a7a39d648d1517d5fa734b7495e3f909d23628a34c76de3d5ef57f6] ...
	I1009 20:03:17.235664  439734 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d77ac3de8a7a39d648d1517d5fa734b7495e3f909d23628a34c76de3d5ef57f6"
	I1009 20:03:17.288386  439734 logs.go:123] Gathering logs for kube-scheduler [860dfe4c78e184a9fee28ff64c7bf737667b86888d84a7c16dc138690d6b945c] ...
	I1009 20:03:17.288460  439734 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 860dfe4c78e184a9fee28ff64c7bf737667b86888d84a7c16dc138690d6b945c"
	I1009 20:03:17.362194  439734 logs.go:123] Gathering logs for kube-controller-manager [56dd658f7c0543e07581d314639507f53947ef9a0096c18a5f0d9d2205eee024] ...
	I1009 20:03:17.362268  439734 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 56dd658f7c0543e07581d314639507f53947ef9a0096c18a5f0d9d2205eee024"
	I1009 20:03:17.400210  439734 logs.go:123] Gathering logs for CRI-O ...
	I1009 20:03:17.400235  439734 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 20:03:17.470776  439734 logs.go:123] Gathering logs for container status ...
	I1009 20:03:17.470809  439734 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 20:03:17.506052  439734 logs.go:123] Gathering logs for kubelet ...
	I1009 20:03:17.506083  439734 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 20:03:17.651048  439734 logs.go:123] Gathering logs for dmesg ...
	I1009 20:03:17.651094  439734 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 20:03:17.669703  439734 logs.go:123] Gathering logs for describe nodes ...
	I1009 20:03:17.669733  439734 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 20:03:17.773879  439734 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 20:03:18.846346  455689 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 20:03:18.846493  455689 addons.go:514] duration metric: took 6.86701ms for enable addons: enabled=[]
	I1009 20:03:19.095247  455689 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1009 20:03:19.114131  455689 node_ready.go:35] waiting up to 6m0s for node "pause-383163" to be "Ready" ...
	I1009 20:03:23.206100  455689 node_ready.go:49] node "pause-383163" is "Ready"
	I1009 20:03:23.206131  455689 node_ready.go:38] duration metric: took 4.091971451s for node "pause-383163" to be "Ready" ...
	I1009 20:03:23.206146  455689 api_server.go:52] waiting for apiserver process to appear ...
	I1009 20:03:23.206209  455689 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:03:23.226115  455689 api_server.go:72] duration metric: took 4.386839053s to wait for apiserver process to appear ...
	I1009 20:03:23.226141  455689 api_server.go:88] waiting for apiserver healthz status ...
	I1009 20:03:23.226162  455689 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1009 20:03:23.236894  455689 api_server.go:279] https://192.168.85.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1009 20:03:23.236925  455689 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1009 20:03:20.274700  439734 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1009 20:03:20.275064  439734 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1009 20:03:20.275104  439734 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 20:03:20.275158  439734 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 20:03:20.319984  439734 cri.go:89] found id: "d77ac3de8a7a39d648d1517d5fa734b7495e3f909d23628a34c76de3d5ef57f6"
	I1009 20:03:20.320003  439734 cri.go:89] found id: ""
	I1009 20:03:20.320013  439734 logs.go:282] 1 containers: [d77ac3de8a7a39d648d1517d5fa734b7495e3f909d23628a34c76de3d5ef57f6]
	I1009 20:03:20.320075  439734 ssh_runner.go:195] Run: which crictl
	I1009 20:03:20.329751  439734 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 20:03:20.329827  439734 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 20:03:20.372154  439734 cri.go:89] found id: ""
	I1009 20:03:20.372177  439734 logs.go:282] 0 containers: []
	W1009 20:03:20.372186  439734 logs.go:284] No container was found matching "etcd"
	I1009 20:03:20.372193  439734 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 20:03:20.372254  439734 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 20:03:20.414446  439734 cri.go:89] found id: ""
	I1009 20:03:20.414468  439734 logs.go:282] 0 containers: []
	W1009 20:03:20.414477  439734 logs.go:284] No container was found matching "coredns"
	I1009 20:03:20.414483  439734 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 20:03:20.414549  439734 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 20:03:20.469103  439734 cri.go:89] found id: "860dfe4c78e184a9fee28ff64c7bf737667b86888d84a7c16dc138690d6b945c"
	I1009 20:03:20.469169  439734 cri.go:89] found id: ""
	I1009 20:03:20.469177  439734 logs.go:282] 1 containers: [860dfe4c78e184a9fee28ff64c7bf737667b86888d84a7c16dc138690d6b945c]
	I1009 20:03:20.469239  439734 ssh_runner.go:195] Run: which crictl
	I1009 20:03:20.476540  439734 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 20:03:20.476621  439734 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 20:03:20.528340  439734 cri.go:89] found id: ""
	I1009 20:03:20.528362  439734 logs.go:282] 0 containers: []
	W1009 20:03:20.528371  439734 logs.go:284] No container was found matching "kube-proxy"
	I1009 20:03:20.528377  439734 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 20:03:20.528442  439734 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 20:03:20.574587  439734 cri.go:89] found id: "56dd658f7c0543e07581d314639507f53947ef9a0096c18a5f0d9d2205eee024"
	I1009 20:03:20.574664  439734 cri.go:89] found id: ""
	I1009 20:03:20.574688  439734 logs.go:282] 1 containers: [56dd658f7c0543e07581d314639507f53947ef9a0096c18a5f0d9d2205eee024]
	I1009 20:03:20.574780  439734 ssh_runner.go:195] Run: which crictl
	I1009 20:03:20.581646  439734 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 20:03:20.581796  439734 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 20:03:20.644312  439734 cri.go:89] found id: ""
	I1009 20:03:20.644391  439734 logs.go:282] 0 containers: []
	W1009 20:03:20.644417  439734 logs.go:284] No container was found matching "kindnet"
	I1009 20:03:20.644455  439734 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1009 20:03:20.644542  439734 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1009 20:03:20.692122  439734 cri.go:89] found id: ""
	I1009 20:03:20.692201  439734 logs.go:282] 0 containers: []
	W1009 20:03:20.692225  439734 logs.go:284] No container was found matching "storage-provisioner"
	I1009 20:03:20.692266  439734 logs.go:123] Gathering logs for CRI-O ...
	I1009 20:03:20.692297  439734 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 20:03:20.790226  439734 logs.go:123] Gathering logs for container status ...
	I1009 20:03:20.790308  439734 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 20:03:20.837868  439734 logs.go:123] Gathering logs for kubelet ...
	I1009 20:03:20.837944  439734 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 20:03:20.987972  439734 logs.go:123] Gathering logs for dmesg ...
	I1009 20:03:20.988062  439734 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 20:03:21.011346  439734 logs.go:123] Gathering logs for describe nodes ...
	I1009 20:03:21.011372  439734 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 20:03:21.148357  439734 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 20:03:21.148421  439734 logs.go:123] Gathering logs for kube-apiserver [d77ac3de8a7a39d648d1517d5fa734b7495e3f909d23628a34c76de3d5ef57f6] ...
	I1009 20:03:21.148451  439734 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d77ac3de8a7a39d648d1517d5fa734b7495e3f909d23628a34c76de3d5ef57f6"
	I1009 20:03:21.204135  439734 logs.go:123] Gathering logs for kube-scheduler [860dfe4c78e184a9fee28ff64c7bf737667b86888d84a7c16dc138690d6b945c] ...
	I1009 20:03:21.204208  439734 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 860dfe4c78e184a9fee28ff64c7bf737667b86888d84a7c16dc138690d6b945c"
	I1009 20:03:21.311602  439734 logs.go:123] Gathering logs for kube-controller-manager [56dd658f7c0543e07581d314639507f53947ef9a0096c18a5f0d9d2205eee024] ...
	I1009 20:03:21.311682  439734 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 56dd658f7c0543e07581d314639507f53947ef9a0096c18a5f0d9d2205eee024"
	I1009 20:03:23.843113  439734 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1009 20:03:23.843528  439734 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1009 20:03:23.843567  439734 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 20:03:23.843625  439734 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 20:03:23.888654  439734 cri.go:89] found id: "d77ac3de8a7a39d648d1517d5fa734b7495e3f909d23628a34c76de3d5ef57f6"
	I1009 20:03:23.888675  439734 cri.go:89] found id: ""
	I1009 20:03:23.888684  439734 logs.go:282] 1 containers: [d77ac3de8a7a39d648d1517d5fa734b7495e3f909d23628a34c76de3d5ef57f6]
	I1009 20:03:23.888742  439734 ssh_runner.go:195] Run: which crictl
	I1009 20:03:23.894563  439734 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 20:03:23.894679  439734 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 20:03:23.967675  439734 cri.go:89] found id: ""
	I1009 20:03:23.967713  439734 logs.go:282] 0 containers: []
	W1009 20:03:23.967722  439734 logs.go:284] No container was found matching "etcd"
	I1009 20:03:23.967728  439734 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 20:03:23.967803  439734 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 20:03:24.016665  439734 cri.go:89] found id: ""
	I1009 20:03:24.016735  439734 logs.go:282] 0 containers: []
	W1009 20:03:24.016764  439734 logs.go:284] No container was found matching "coredns"
	I1009 20:03:24.016791  439734 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 20:03:24.016919  439734 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 20:03:24.065029  439734 cri.go:89] found id: "860dfe4c78e184a9fee28ff64c7bf737667b86888d84a7c16dc138690d6b945c"
	I1009 20:03:24.065095  439734 cri.go:89] found id: ""
	I1009 20:03:24.065189  439734 logs.go:282] 1 containers: [860dfe4c78e184a9fee28ff64c7bf737667b86888d84a7c16dc138690d6b945c]
	I1009 20:03:24.065277  439734 ssh_runner.go:195] Run: which crictl
	I1009 20:03:24.072497  439734 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 20:03:24.072618  439734 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 20:03:24.140643  439734 cri.go:89] found id: ""
	I1009 20:03:24.140710  439734 logs.go:282] 0 containers: []
	W1009 20:03:24.140739  439734 logs.go:284] No container was found matching "kube-proxy"
	I1009 20:03:24.140785  439734 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 20:03:24.140877  439734 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 20:03:24.185805  439734 cri.go:89] found id: "56dd658f7c0543e07581d314639507f53947ef9a0096c18a5f0d9d2205eee024"
	I1009 20:03:24.185839  439734 cri.go:89] found id: ""
	I1009 20:03:24.185848  439734 logs.go:282] 1 containers: [56dd658f7c0543e07581d314639507f53947ef9a0096c18a5f0d9d2205eee024]
	I1009 20:03:24.185915  439734 ssh_runner.go:195] Run: which crictl
	I1009 20:03:24.190238  439734 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 20:03:24.190323  439734 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 20:03:24.237902  439734 cri.go:89] found id: ""
	I1009 20:03:24.237976  439734 logs.go:282] 0 containers: []
	W1009 20:03:24.238002  439734 logs.go:284] No container was found matching "kindnet"
	I1009 20:03:24.238040  439734 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1009 20:03:24.238119  439734 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1009 20:03:24.279737  439734 cri.go:89] found id: ""
	I1009 20:03:24.279813  439734 logs.go:282] 0 containers: []
	W1009 20:03:24.279837  439734 logs.go:284] No container was found matching "storage-provisioner"
	I1009 20:03:24.279887  439734 logs.go:123] Gathering logs for dmesg ...
	I1009 20:03:24.279931  439734 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 20:03:24.307158  439734 logs.go:123] Gathering logs for describe nodes ...
	I1009 20:03:24.307252  439734 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 20:03:24.400052  439734 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 20:03:24.400125  439734 logs.go:123] Gathering logs for kube-apiserver [d77ac3de8a7a39d648d1517d5fa734b7495e3f909d23628a34c76de3d5ef57f6] ...
	I1009 20:03:24.400156  439734 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d77ac3de8a7a39d648d1517d5fa734b7495e3f909d23628a34c76de3d5ef57f6"
	I1009 20:03:24.456710  439734 logs.go:123] Gathering logs for kube-scheduler [860dfe4c78e184a9fee28ff64c7bf737667b86888d84a7c16dc138690d6b945c] ...
	I1009 20:03:24.456785  439734 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 860dfe4c78e184a9fee28ff64c7bf737667b86888d84a7c16dc138690d6b945c"
	I1009 20:03:24.547322  439734 logs.go:123] Gathering logs for kube-controller-manager [56dd658f7c0543e07581d314639507f53947ef9a0096c18a5f0d9d2205eee024] ...
	I1009 20:03:24.547404  439734 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 56dd658f7c0543e07581d314639507f53947ef9a0096c18a5f0d9d2205eee024"
	I1009 20:03:24.594512  439734 logs.go:123] Gathering logs for CRI-O ...
	I1009 20:03:24.594536  439734 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 20:03:24.663315  439734 logs.go:123] Gathering logs for container status ...
	I1009 20:03:24.663352  439734 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 20:03:24.699592  439734 logs.go:123] Gathering logs for kubelet ...
	I1009 20:03:24.699668  439734 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 20:03:23.726364  455689 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1009 20:03:23.737081  455689 api_server.go:279] https://192.168.85.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1009 20:03:23.737120  455689 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1009 20:03:24.226693  455689 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1009 20:03:24.236295  455689 api_server.go:279] https://192.168.85.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1009 20:03:24.236323  455689 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1009 20:03:24.726728  455689 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1009 20:03:24.735617  455689 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1009 20:03:24.736896  455689 api_server.go:141] control plane version: v1.34.1
	I1009 20:03:24.736927  455689 api_server.go:131] duration metric: took 1.510777921s to wait for apiserver health ...
	I1009 20:03:24.736937  455689 system_pods.go:43] waiting for kube-system pods to appear ...
	I1009 20:03:24.741734  455689 system_pods.go:59] 7 kube-system pods found
	I1009 20:03:24.741779  455689 system_pods.go:61] "coredns-66bc5c9577-kj4l8" [9347b1f1-06ba-4612-96e4-9f5e09ba2500] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1009 20:03:24.741791  455689 system_pods.go:61] "etcd-pause-383163" [cfe74798-561d-4053-91e7-db47e37cad9e] Running
	I1009 20:03:24.741802  455689 system_pods.go:61] "kindnet-2blxf" [2bcb5c94-b301-4db9-bcf2-5f6eba8b07c7] Running
	I1009 20:03:24.741815  455689 system_pods.go:61] "kube-apiserver-pause-383163" [5c811666-93d0-42aa-a0c6-151265e26643] Running
	I1009 20:03:24.741824  455689 system_pods.go:61] "kube-controller-manager-pause-383163" [67d8f6ea-5f5c-4aae-8d70-04400ce570be] Running
	I1009 20:03:24.741830  455689 system_pods.go:61] "kube-proxy-9k7j8" [b521ebd5-2359-4c44-9357-f2ac6cdd9719] Running
	I1009 20:03:24.741836  455689 system_pods.go:61] "kube-scheduler-pause-383163" [04e13cb2-6a0b-457c-89b7-e7dbfe30a206] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1009 20:03:24.741846  455689 system_pods.go:74] duration metric: took 4.897279ms to wait for pod list to return data ...
	I1009 20:03:24.741855  455689 default_sa.go:34] waiting for default service account to be created ...
	I1009 20:03:24.745025  455689 default_sa.go:45] found service account: "default"
	I1009 20:03:24.745048  455689 default_sa.go:55] duration metric: took 3.183016ms for default service account to be created ...
	I1009 20:03:24.745058  455689 system_pods.go:116] waiting for k8s-apps to be running ...
	I1009 20:03:24.749369  455689 system_pods.go:86] 7 kube-system pods found
	I1009 20:03:24.749413  455689 system_pods.go:89] "coredns-66bc5c9577-kj4l8" [9347b1f1-06ba-4612-96e4-9f5e09ba2500] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1009 20:03:24.749423  455689 system_pods.go:89] "etcd-pause-383163" [cfe74798-561d-4053-91e7-db47e37cad9e] Running
	I1009 20:03:24.749435  455689 system_pods.go:89] "kindnet-2blxf" [2bcb5c94-b301-4db9-bcf2-5f6eba8b07c7] Running
	I1009 20:03:24.749440  455689 system_pods.go:89] "kube-apiserver-pause-383163" [5c811666-93d0-42aa-a0c6-151265e26643] Running
	I1009 20:03:24.749444  455689 system_pods.go:89] "kube-controller-manager-pause-383163" [67d8f6ea-5f5c-4aae-8d70-04400ce570be] Running
	I1009 20:03:24.749455  455689 system_pods.go:89] "kube-proxy-9k7j8" [b521ebd5-2359-4c44-9357-f2ac6cdd9719] Running
	I1009 20:03:24.749461  455689 system_pods.go:89] "kube-scheduler-pause-383163" [04e13cb2-6a0b-457c-89b7-e7dbfe30a206] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1009 20:03:24.749477  455689 system_pods.go:126] duration metric: took 4.404516ms to wait for k8s-apps to be running ...
	I1009 20:03:24.749490  455689 system_svc.go:44] waiting for kubelet service to be running ....
	I1009 20:03:24.749556  455689 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1009 20:03:24.767849  455689 system_svc.go:56] duration metric: took 18.350123ms WaitForService to wait for kubelet
	I1009 20:03:24.767878  455689 kubeadm.go:586] duration metric: took 5.928605905s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1009 20:03:24.767897  455689 node_conditions.go:102] verifying NodePressure condition ...
	I1009 20:03:24.771873  455689 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1009 20:03:24.771907  455689 node_conditions.go:123] node cpu capacity is 2
	I1009 20:03:24.771920  455689 node_conditions.go:105] duration metric: took 4.007794ms to run NodePressure ...
	I1009 20:03:24.771937  455689 start.go:242] waiting for startup goroutines ...
	I1009 20:03:24.771959  455689 start.go:247] waiting for cluster config update ...
	I1009 20:03:24.771968  455689 start.go:256] writing updated cluster config ...
	I1009 20:03:24.773297  455689 ssh_runner.go:195] Run: rm -f paused
	I1009 20:03:24.779762  455689 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1009 20:03:24.780608  455689 kapi.go:59] client config for pause-383163: &rest.Config{Host:"https://192.168.85.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21683-294150/.minikube/profiles/pause-383163/client.crt", KeyFile:"/home/jenkins/minikube-integration/21683-294150/.minikube/profiles/pause-383163/client.key", CAFile:"/home/jenkins/minikube-integration/21683-294150/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2120180), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1009 20:03:24.784257  455689 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-kj4l8" in "kube-system" namespace to be "Ready" or be gone ...
	W1009 20:03:26.789657  455689 pod_ready.go:104] pod "coredns-66bc5c9577-kj4l8" is not "Ready", error: <nil>
	I1009 20:03:27.326059  439734 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1009 20:03:27.326485  439734 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1009 20:03:27.326533  439734 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 20:03:27.326593  439734 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 20:03:27.354045  439734 cri.go:89] found id: "d77ac3de8a7a39d648d1517d5fa734b7495e3f909d23628a34c76de3d5ef57f6"
	I1009 20:03:27.354067  439734 cri.go:89] found id: ""
	I1009 20:03:27.354075  439734 logs.go:282] 1 containers: [d77ac3de8a7a39d648d1517d5fa734b7495e3f909d23628a34c76de3d5ef57f6]
	I1009 20:03:27.354134  439734 ssh_runner.go:195] Run: which crictl
	I1009 20:03:27.357771  439734 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 20:03:27.357850  439734 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 20:03:27.385231  439734 cri.go:89] found id: ""
	I1009 20:03:27.385255  439734 logs.go:282] 0 containers: []
	W1009 20:03:27.385264  439734 logs.go:284] No container was found matching "etcd"
	I1009 20:03:27.385270  439734 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 20:03:27.385330  439734 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 20:03:27.412550  439734 cri.go:89] found id: ""
	I1009 20:03:27.412576  439734 logs.go:282] 0 containers: []
	W1009 20:03:27.412585  439734 logs.go:284] No container was found matching "coredns"
	I1009 20:03:27.412592  439734 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 20:03:27.412650  439734 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 20:03:27.440052  439734 cri.go:89] found id: "860dfe4c78e184a9fee28ff64c7bf737667b86888d84a7c16dc138690d6b945c"
	I1009 20:03:27.440076  439734 cri.go:89] found id: ""
	I1009 20:03:27.440091  439734 logs.go:282] 1 containers: [860dfe4c78e184a9fee28ff64c7bf737667b86888d84a7c16dc138690d6b945c]
	I1009 20:03:27.440153  439734 ssh_runner.go:195] Run: which crictl
	I1009 20:03:27.443975  439734 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 20:03:27.444050  439734 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 20:03:27.472498  439734 cri.go:89] found id: ""
	I1009 20:03:27.472522  439734 logs.go:282] 0 containers: []
	W1009 20:03:27.472531  439734 logs.go:284] No container was found matching "kube-proxy"
	I1009 20:03:27.472544  439734 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 20:03:27.472605  439734 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 20:03:27.499499  439734 cri.go:89] found id: "56dd658f7c0543e07581d314639507f53947ef9a0096c18a5f0d9d2205eee024"
	I1009 20:03:27.499535  439734 cri.go:89] found id: ""
	I1009 20:03:27.499544  439734 logs.go:282] 1 containers: [56dd658f7c0543e07581d314639507f53947ef9a0096c18a5f0d9d2205eee024]
	I1009 20:03:27.499653  439734 ssh_runner.go:195] Run: which crictl
	I1009 20:03:27.503785  439734 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 20:03:27.503905  439734 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 20:03:27.532469  439734 cri.go:89] found id: ""
	I1009 20:03:27.532509  439734 logs.go:282] 0 containers: []
	W1009 20:03:27.532519  439734 logs.go:284] No container was found matching "kindnet"
	I1009 20:03:27.532542  439734 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1009 20:03:27.532630  439734 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1009 20:03:27.559782  439734 cri.go:89] found id: ""
	I1009 20:03:27.559808  439734 logs.go:282] 0 containers: []
	W1009 20:03:27.559817  439734 logs.go:284] No container was found matching "storage-provisioner"
	I1009 20:03:27.559826  439734 logs.go:123] Gathering logs for kubelet ...
	I1009 20:03:27.559837  439734 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 20:03:27.681078  439734 logs.go:123] Gathering logs for dmesg ...
	I1009 20:03:27.681119  439734 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 20:03:27.698634  439734 logs.go:123] Gathering logs for describe nodes ...
	I1009 20:03:27.698663  439734 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 20:03:27.774623  439734 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 20:03:27.774646  439734 logs.go:123] Gathering logs for kube-apiserver [d77ac3de8a7a39d648d1517d5fa734b7495e3f909d23628a34c76de3d5ef57f6] ...
	I1009 20:03:27.774659  439734 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d77ac3de8a7a39d648d1517d5fa734b7495e3f909d23628a34c76de3d5ef57f6"
	I1009 20:03:27.811575  439734 logs.go:123] Gathering logs for kube-scheduler [860dfe4c78e184a9fee28ff64c7bf737667b86888d84a7c16dc138690d6b945c] ...
	I1009 20:03:27.811608  439734 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 860dfe4c78e184a9fee28ff64c7bf737667b86888d84a7c16dc138690d6b945c"
	I1009 20:03:27.885299  439734 logs.go:123] Gathering logs for kube-controller-manager [56dd658f7c0543e07581d314639507f53947ef9a0096c18a5f0d9d2205eee024] ...
	I1009 20:03:27.885344  439734 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 56dd658f7c0543e07581d314639507f53947ef9a0096c18a5f0d9d2205eee024"
	I1009 20:03:27.915445  439734 logs.go:123] Gathering logs for CRI-O ...
	I1009 20:03:27.915471  439734 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 20:03:27.978009  439734 logs.go:123] Gathering logs for container status ...
	I1009 20:03:27.978051  439734 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1009 20:03:28.790130  455689 pod_ready.go:104] pod "coredns-66bc5c9577-kj4l8" is not "Ready", error: <nil>
	I1009 20:03:30.790243  455689 pod_ready.go:94] pod "coredns-66bc5c9577-kj4l8" is "Ready"
	I1009 20:03:30.790265  455689 pod_ready.go:86] duration metric: took 6.005982193s for pod "coredns-66bc5c9577-kj4l8" in "kube-system" namespace to be "Ready" or be gone ...
	I1009 20:03:30.793797  455689 pod_ready.go:83] waiting for pod "etcd-pause-383163" in "kube-system" namespace to be "Ready" or be gone ...
	I1009 20:03:31.299907  455689 pod_ready.go:94] pod "etcd-pause-383163" is "Ready"
	I1009 20:03:31.299929  455689 pod_ready.go:86] duration metric: took 506.108769ms for pod "etcd-pause-383163" in "kube-system" namespace to be "Ready" or be gone ...
	I1009 20:03:31.303568  455689 pod_ready.go:83] waiting for pod "kube-apiserver-pause-383163" in "kube-system" namespace to be "Ready" or be gone ...
	I1009 20:03:31.809364  455689 pod_ready.go:94] pod "kube-apiserver-pause-383163" is "Ready"
	I1009 20:03:31.809395  455689 pod_ready.go:86] duration metric: took 505.805473ms for pod "kube-apiserver-pause-383163" in "kube-system" namespace to be "Ready" or be gone ...
	I1009 20:03:31.812011  455689 pod_ready.go:83] waiting for pod "kube-controller-manager-pause-383163" in "kube-system" namespace to be "Ready" or be gone ...
	W1009 20:03:33.820288  455689 pod_ready.go:104] pod "kube-controller-manager-pause-383163" is not "Ready", error: <nil>
	I1009 20:03:34.321975  455689 pod_ready.go:94] pod "kube-controller-manager-pause-383163" is "Ready"
	I1009 20:03:34.321998  455689 pod_ready.go:86] duration metric: took 2.509962833s for pod "kube-controller-manager-pause-383163" in "kube-system" namespace to be "Ready" or be gone ...
	I1009 20:03:34.326984  455689 pod_ready.go:83] waiting for pod "kube-proxy-9k7j8" in "kube-system" namespace to be "Ready" or be gone ...
	I1009 20:03:34.391574  455689 pod_ready.go:94] pod "kube-proxy-9k7j8" is "Ready"
	I1009 20:03:34.391602  455689 pod_ready.go:86] duration metric: took 64.59613ms for pod "kube-proxy-9k7j8" in "kube-system" namespace to be "Ready" or be gone ...
	I1009 20:03:34.587930  455689 pod_ready.go:83] waiting for pod "kube-scheduler-pause-383163" in "kube-system" namespace to be "Ready" or be gone ...
	I1009 20:03:34.988313  455689 pod_ready.go:94] pod "kube-scheduler-pause-383163" is "Ready"
	I1009 20:03:34.988339  455689 pod_ready.go:86] duration metric: took 400.377959ms for pod "kube-scheduler-pause-383163" in "kube-system" namespace to be "Ready" or be gone ...
	I1009 20:03:34.988351  455689 pod_ready.go:40] duration metric: took 10.208558261s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1009 20:03:35.058739  455689 start.go:628] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1009 20:03:35.062049  455689 out.go:179] * Done! kubectl is now configured to use "pause-383163" cluster and "default" namespace by default
	I1009 20:03:30.516358  439734 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1009 20:03:30.516882  439734 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1009 20:03:30.516933  439734 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 20:03:30.516990  439734 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 20:03:30.562407  439734 cri.go:89] found id: "d77ac3de8a7a39d648d1517d5fa734b7495e3f909d23628a34c76de3d5ef57f6"
	I1009 20:03:30.562427  439734 cri.go:89] found id: ""
	I1009 20:03:30.562435  439734 logs.go:282] 1 containers: [d77ac3de8a7a39d648d1517d5fa734b7495e3f909d23628a34c76de3d5ef57f6]
	I1009 20:03:30.562495  439734 ssh_runner.go:195] Run: which crictl
	I1009 20:03:30.568119  439734 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 20:03:30.568192  439734 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 20:03:30.612048  439734 cri.go:89] found id: ""
	I1009 20:03:30.612071  439734 logs.go:282] 0 containers: []
	W1009 20:03:30.612079  439734 logs.go:284] No container was found matching "etcd"
	I1009 20:03:30.612086  439734 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 20:03:30.612145  439734 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 20:03:30.657850  439734 cri.go:89] found id: ""
	I1009 20:03:30.657879  439734 logs.go:282] 0 containers: []
	W1009 20:03:30.657889  439734 logs.go:284] No container was found matching "coredns"
	I1009 20:03:30.657896  439734 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 20:03:30.657958  439734 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 20:03:30.686208  439734 cri.go:89] found id: "860dfe4c78e184a9fee28ff64c7bf737667b86888d84a7c16dc138690d6b945c"
	I1009 20:03:30.686300  439734 cri.go:89] found id: ""
	I1009 20:03:30.686326  439734 logs.go:282] 1 containers: [860dfe4c78e184a9fee28ff64c7bf737667b86888d84a7c16dc138690d6b945c]
	I1009 20:03:30.686422  439734 ssh_runner.go:195] Run: which crictl
	I1009 20:03:30.690774  439734 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 20:03:30.690877  439734 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 20:03:30.718329  439734 cri.go:89] found id: ""
	I1009 20:03:30.718358  439734 logs.go:282] 0 containers: []
	W1009 20:03:30.718367  439734 logs.go:284] No container was found matching "kube-proxy"
	I1009 20:03:30.718374  439734 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 20:03:30.718439  439734 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 20:03:30.746795  439734 cri.go:89] found id: "56dd658f7c0543e07581d314639507f53947ef9a0096c18a5f0d9d2205eee024"
	I1009 20:03:30.746818  439734 cri.go:89] found id: ""
	I1009 20:03:30.746838  439734 logs.go:282] 1 containers: [56dd658f7c0543e07581d314639507f53947ef9a0096c18a5f0d9d2205eee024]
	I1009 20:03:30.746917  439734 ssh_runner.go:195] Run: which crictl
	I1009 20:03:30.750792  439734 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 20:03:30.750868  439734 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 20:03:30.778809  439734 cri.go:89] found id: ""
	I1009 20:03:30.778876  439734 logs.go:282] 0 containers: []
	W1009 20:03:30.778906  439734 logs.go:284] No container was found matching "kindnet"
	I1009 20:03:30.778921  439734 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1009 20:03:30.778996  439734 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1009 20:03:30.810367  439734 cri.go:89] found id: ""
	I1009 20:03:30.810391  439734 logs.go:282] 0 containers: []
	W1009 20:03:30.810399  439734 logs.go:284] No container was found matching "storage-provisioner"
	I1009 20:03:30.810408  439734 logs.go:123] Gathering logs for dmesg ...
	I1009 20:03:30.810440  439734 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 20:03:30.826971  439734 logs.go:123] Gathering logs for describe nodes ...
	I1009 20:03:30.827006  439734 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 20:03:30.902605  439734 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 20:03:30.902627  439734 logs.go:123] Gathering logs for kube-apiserver [d77ac3de8a7a39d648d1517d5fa734b7495e3f909d23628a34c76de3d5ef57f6] ...
	I1009 20:03:30.902640  439734 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d77ac3de8a7a39d648d1517d5fa734b7495e3f909d23628a34c76de3d5ef57f6"
	I1009 20:03:30.934787  439734 logs.go:123] Gathering logs for kube-scheduler [860dfe4c78e184a9fee28ff64c7bf737667b86888d84a7c16dc138690d6b945c] ...
	I1009 20:03:30.934816  439734 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 860dfe4c78e184a9fee28ff64c7bf737667b86888d84a7c16dc138690d6b945c"
	I1009 20:03:31.010141  439734 logs.go:123] Gathering logs for kube-controller-manager [56dd658f7c0543e07581d314639507f53947ef9a0096c18a5f0d9d2205eee024] ...
	I1009 20:03:31.010211  439734 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 56dd658f7c0543e07581d314639507f53947ef9a0096c18a5f0d9d2205eee024"
	I1009 20:03:31.038135  439734 logs.go:123] Gathering logs for CRI-O ...
	I1009 20:03:31.038164  439734 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 20:03:31.102981  439734 logs.go:123] Gathering logs for container status ...
	I1009 20:03:31.103026  439734 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 20:03:31.148932  439734 logs.go:123] Gathering logs for kubelet ...
	I1009 20:03:31.148965  439734 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 20:03:33.772536  439734 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1009 20:03:33.772945  439734 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1009 20:03:33.772985  439734 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 20:03:33.773040  439734 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 20:03:33.800859  439734 cri.go:89] found id: "d77ac3de8a7a39d648d1517d5fa734b7495e3f909d23628a34c76de3d5ef57f6"
	I1009 20:03:33.800880  439734 cri.go:89] found id: ""
	I1009 20:03:33.800888  439734 logs.go:282] 1 containers: [d77ac3de8a7a39d648d1517d5fa734b7495e3f909d23628a34c76de3d5ef57f6]
	I1009 20:03:33.800947  439734 ssh_runner.go:195] Run: which crictl
	I1009 20:03:33.804697  439734 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 20:03:33.804793  439734 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 20:03:33.833302  439734 cri.go:89] found id: ""
	I1009 20:03:33.833329  439734 logs.go:282] 0 containers: []
	W1009 20:03:33.833338  439734 logs.go:284] No container was found matching "etcd"
	I1009 20:03:33.833345  439734 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 20:03:33.833452  439734 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 20:03:33.859581  439734 cri.go:89] found id: ""
	I1009 20:03:33.859653  439734 logs.go:282] 0 containers: []
	W1009 20:03:33.859677  439734 logs.go:284] No container was found matching "coredns"
	I1009 20:03:33.859703  439734 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 20:03:33.859803  439734 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 20:03:33.888506  439734 cri.go:89] found id: "860dfe4c78e184a9fee28ff64c7bf737667b86888d84a7c16dc138690d6b945c"
	I1009 20:03:33.888569  439734 cri.go:89] found id: ""
	I1009 20:03:33.888593  439734 logs.go:282] 1 containers: [860dfe4c78e184a9fee28ff64c7bf737667b86888d84a7c16dc138690d6b945c]
	I1009 20:03:33.888680  439734 ssh_runner.go:195] Run: which crictl
	I1009 20:03:33.892751  439734 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 20:03:33.892855  439734 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 20:03:33.919854  439734 cri.go:89] found id: ""
	I1009 20:03:33.919881  439734 logs.go:282] 0 containers: []
	W1009 20:03:33.919891  439734 logs.go:284] No container was found matching "kube-proxy"
	I1009 20:03:33.919898  439734 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 20:03:33.919961  439734 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 20:03:33.946775  439734 cri.go:89] found id: "56dd658f7c0543e07581d314639507f53947ef9a0096c18a5f0d9d2205eee024"
	I1009 20:03:33.946805  439734 cri.go:89] found id: ""
	I1009 20:03:33.946814  439734 logs.go:282] 1 containers: [56dd658f7c0543e07581d314639507f53947ef9a0096c18a5f0d9d2205eee024]
	I1009 20:03:33.946871  439734 ssh_runner.go:195] Run: which crictl
	I1009 20:03:33.950664  439734 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 20:03:33.950771  439734 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 20:03:33.977950  439734 cri.go:89] found id: ""
	I1009 20:03:33.977974  439734 logs.go:282] 0 containers: []
	W1009 20:03:33.977984  439734 logs.go:284] No container was found matching "kindnet"
	I1009 20:03:33.977992  439734 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1009 20:03:33.978055  439734 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1009 20:03:34.010676  439734 cri.go:89] found id: ""
	I1009 20:03:34.010702  439734 logs.go:282] 0 containers: []
	W1009 20:03:34.010711  439734 logs.go:284] No container was found matching "storage-provisioner"
	I1009 20:03:34.010720  439734 logs.go:123] Gathering logs for kube-controller-manager [56dd658f7c0543e07581d314639507f53947ef9a0096c18a5f0d9d2205eee024] ...
	I1009 20:03:34.010732  439734 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 56dd658f7c0543e07581d314639507f53947ef9a0096c18a5f0d9d2205eee024"
	I1009 20:03:34.039185  439734 logs.go:123] Gathering logs for CRI-O ...
	I1009 20:03:34.039213  439734 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 20:03:34.107412  439734 logs.go:123] Gathering logs for container status ...
	I1009 20:03:34.107498  439734 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 20:03:34.140878  439734 logs.go:123] Gathering logs for kubelet ...
	I1009 20:03:34.140909  439734 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 20:03:34.261115  439734 logs.go:123] Gathering logs for dmesg ...
	I1009 20:03:34.261151  439734 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 20:03:34.278332  439734 logs.go:123] Gathering logs for describe nodes ...
	I1009 20:03:34.278361  439734 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 20:03:34.348342  439734 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 20:03:34.348406  439734 logs.go:123] Gathering logs for kube-apiserver [d77ac3de8a7a39d648d1517d5fa734b7495e3f909d23628a34c76de3d5ef57f6] ...
	I1009 20:03:34.348435  439734 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d77ac3de8a7a39d648d1517d5fa734b7495e3f909d23628a34c76de3d5ef57f6"
	I1009 20:03:34.402366  439734 logs.go:123] Gathering logs for kube-scheduler [860dfe4c78e184a9fee28ff64c7bf737667b86888d84a7c16dc138690d6b945c] ...
	I1009 20:03:34.402399  439734 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 860dfe4c78e184a9fee28ff64c7bf737667b86888d84a7c16dc138690d6b945c"
	
	
	==> CRI-O <==
	Oct 09 20:03:18 pause-383163 crio[2039]: time="2025-10-09T20:03:18.664937161Z" level=info msg="Created container b9eb2f7f088ee645099c5cd4b8e1f669da2435cd313728f2bfbfc759ff9937b6: kube-system/kube-controller-manager-pause-383163/kube-controller-manager" id=9e9aa368-c1a5-482d-bb44-9034b444823f name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 20:03:18 pause-383163 crio[2039]: time="2025-10-09T20:03:18.704370312Z" level=info msg="Created container 75d7f10be8c3e4bdde6c5890b28343819891d67d2e73eb06f38c47013ae3a3cb: kube-system/kube-scheduler-pause-383163/kube-scheduler" id=8259f2e1-08e3-49bb-85d3-cb7907f7a2c8 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 20:03:18 pause-383163 crio[2039]: time="2025-10-09T20:03:18.705009103Z" level=info msg="Started container" PID=2287 containerID=4a07552f3446603a46059c12e8713e08b798083b8d17d79c386bb391fc8c893c description=kube-system/kindnet-2blxf/kindnet-cni id=d8d3b118-65f8-4878-a3ea-f6858893b427 name=/runtime.v1.RuntimeService/StartContainer sandboxID=9c46ab11aac900411bcd9c2764ae1a2bebd9f21c37a51c6111dfd5acb0c37cf5
	Oct 09 20:03:18 pause-383163 crio[2039]: time="2025-10-09T20:03:18.711295341Z" level=info msg="Starting container: 75d7f10be8c3e4bdde6c5890b28343819891d67d2e73eb06f38c47013ae3a3cb" id=fc03464a-6152-4779-bc4d-778ddacdbbbb name=/runtime.v1.RuntimeService/StartContainer
	Oct 09 20:03:18 pause-383163 crio[2039]: time="2025-10-09T20:03:18.726078478Z" level=info msg="Starting container: b9eb2f7f088ee645099c5cd4b8e1f669da2435cd313728f2bfbfc759ff9937b6" id=7150753c-f53c-4efb-8206-f82c714843c6 name=/runtime.v1.RuntimeService/StartContainer
	Oct 09 20:03:18 pause-383163 crio[2039]: time="2025-10-09T20:03:18.726911567Z" level=info msg="Started container" PID=2306 containerID=75d7f10be8c3e4bdde6c5890b28343819891d67d2e73eb06f38c47013ae3a3cb description=kube-system/kube-scheduler-pause-383163/kube-scheduler id=fc03464a-6152-4779-bc4d-778ddacdbbbb name=/runtime.v1.RuntimeService/StartContainer sandboxID=998e6ac6b4306db15080773b62d2695890106febbe23c40defc5e57810c30474
	Oct 09 20:03:18 pause-383163 crio[2039]: time="2025-10-09T20:03:18.731174339Z" level=info msg="Created container a167ee63efd53a4275e3e7873bad1603ebe7e8a31f0dd3198756d4b1f148e52a: kube-system/kube-proxy-9k7j8/kube-proxy" id=41dfdd28-293d-4200-81e0-d7a35bbafb94 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 20:03:18 pause-383163 crio[2039]: time="2025-10-09T20:03:18.73861129Z" level=info msg="Started container" PID=2282 containerID=b9eb2f7f088ee645099c5cd4b8e1f669da2435cd313728f2bfbfc759ff9937b6 description=kube-system/kube-controller-manager-pause-383163/kube-controller-manager id=7150753c-f53c-4efb-8206-f82c714843c6 name=/runtime.v1.RuntimeService/StartContainer sandboxID=003237e08d9c2c14de2734fd9c7353dd50598d8846315860434b3b53f542920b
	Oct 09 20:03:18 pause-383163 crio[2039]: time="2025-10-09T20:03:18.746865946Z" level=info msg="Starting container: a167ee63efd53a4275e3e7873bad1603ebe7e8a31f0dd3198756d4b1f148e52a" id=6f8cc343-1e16-4d68-99d0-46a94b7de884 name=/runtime.v1.RuntimeService/StartContainer
	Oct 09 20:03:18 pause-383163 crio[2039]: time="2025-10-09T20:03:18.763967288Z" level=info msg="Created container cb4576736bbda7b31792b910061f810e74ebfe1099b49efb9c81dfdd2a1f445b: kube-system/kube-apiserver-pause-383163/kube-apiserver" id=d23689cb-aa19-4a6d-9844-f9469aecc5fb name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 20:03:18 pause-383163 crio[2039]: time="2025-10-09T20:03:18.76467264Z" level=info msg="Starting container: cb4576736bbda7b31792b910061f810e74ebfe1099b49efb9c81dfdd2a1f445b" id=19c3c930-3571-4765-8e49-8de55b56e7ea name=/runtime.v1.RuntimeService/StartContainer
	Oct 09 20:03:18 pause-383163 crio[2039]: time="2025-10-09T20:03:18.76587978Z" level=info msg="Started container" PID=2323 containerID=a167ee63efd53a4275e3e7873bad1603ebe7e8a31f0dd3198756d4b1f148e52a description=kube-system/kube-proxy-9k7j8/kube-proxy id=6f8cc343-1e16-4d68-99d0-46a94b7de884 name=/runtime.v1.RuntimeService/StartContainer sandboxID=955f14dcda9c8b6356d97c9f7ef3b7a84278561f49b1b2d33bccb16a0859e766
	Oct 09 20:03:18 pause-383163 crio[2039]: time="2025-10-09T20:03:18.768253026Z" level=info msg="Started container" PID=2303 containerID=cb4576736bbda7b31792b910061f810e74ebfe1099b49efb9c81dfdd2a1f445b description=kube-system/kube-apiserver-pause-383163/kube-apiserver id=19c3c930-3571-4765-8e49-8de55b56e7ea name=/runtime.v1.RuntimeService/StartContainer sandboxID=6abb6a0f31d585118179d190cdb532c3755dd34e4076ff29965d6fb14b07b7d8
	Oct 09 20:03:28 pause-383163 crio[2039]: time="2025-10-09T20:03:28.977245329Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 09 20:03:28 pause-383163 crio[2039]: time="2025-10-09T20:03:28.981549947Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 09 20:03:28 pause-383163 crio[2039]: time="2025-10-09T20:03:28.9816122Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 09 20:03:28 pause-383163 crio[2039]: time="2025-10-09T20:03:28.981638153Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 09 20:03:28 pause-383163 crio[2039]: time="2025-10-09T20:03:28.984952108Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 09 20:03:28 pause-383163 crio[2039]: time="2025-10-09T20:03:28.984990607Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 09 20:03:28 pause-383163 crio[2039]: time="2025-10-09T20:03:28.985014361Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 09 20:03:28 pause-383163 crio[2039]: time="2025-10-09T20:03:28.9894617Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 09 20:03:28 pause-383163 crio[2039]: time="2025-10-09T20:03:28.989497819Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 09 20:03:28 pause-383163 crio[2039]: time="2025-10-09T20:03:28.989522222Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 09 20:03:28 pause-383163 crio[2039]: time="2025-10-09T20:03:28.992612479Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 09 20:03:28 pause-383163 crio[2039]: time="2025-10-09T20:03:28.99264845Z" level=info msg="Updated default CNI network name to kindnet"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED              STATE               NAME                      ATTEMPT             POD ID              POD                                    NAMESPACE
	a167ee63efd53       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9   19 seconds ago       Running             kube-proxy                1                   955f14dcda9c8       kube-proxy-9k7j8                       kube-system
	cb4576736bbda       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196   19 seconds ago       Running             kube-apiserver            1                   6abb6a0f31d58       kube-apiserver-pause-383163            kube-system
	75d7f10be8c3e       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0   19 seconds ago       Running             kube-scheduler            1                   998e6ac6b4306       kube-scheduler-pause-383163            kube-system
	4a07552f34466       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c   19 seconds ago       Running             kindnet-cni               1                   9c46ab11aac90       kindnet-2blxf                          kube-system
	b9eb2f7f088ee       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a   19 seconds ago       Running             kube-controller-manager   1                   003237e08d9c2       kube-controller-manager-pause-383163   kube-system
	a3e1d7ac8b257       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e   19 seconds ago       Running             etcd                      1                   9b1a12a666139       etcd-pause-383163                      kube-system
	8b7b5b8265013       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc   19 seconds ago       Running             coredns                   1                   c339fddc9b980       coredns-66bc5c9577-kj4l8               kube-system
	8ab6890c2164d       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc   32 seconds ago       Exited              coredns                   0                   c339fddc9b980       coredns-66bc5c9577-kj4l8               kube-system
	bb34801376616       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9   About a minute ago   Exited              kube-proxy                0                   955f14dcda9c8       kube-proxy-9k7j8                       kube-system
	5b2ba970850f9       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c   About a minute ago   Exited              kindnet-cni               0                   9c46ab11aac90       kindnet-2blxf                          kube-system
	715315fe81996       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196   About a minute ago   Exited              kube-apiserver            0                   6abb6a0f31d58       kube-apiserver-pause-383163            kube-system
	f8690925bda20       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a   About a minute ago   Exited              kube-controller-manager   0                   003237e08d9c2       kube-controller-manager-pause-383163   kube-system
	5f2a4c1ed909b       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0   About a minute ago   Exited              kube-scheduler            0                   998e6ac6b4306       kube-scheduler-pause-383163            kube-system
	a58cd421c4789       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e   About a minute ago   Exited              etcd                      0                   9b1a12a666139       etcd-pause-383163                      kube-system
	
	
	==> coredns [8ab6890c2164d0b6bbc82e2679dbd67b5dfe706686726cd94224aaf22c16f80f] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:38501 - 49567 "HINFO IN 337892150647174265.2964964437038389437. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.011932299s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [8b7b5b8265013e32789ed2351787ae158830229a757a1bc103a4456924b76035] <==
	[ERROR] plugin/kubernetes: Unhandled Error
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: endpointslices.discovery.k8s.io is forbidden: User "system:serviceaccount:kube-system:coredns" cannot list resource "endpointslices" in API group "discovery.k8s.io" at the cluster scope
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: namespaces is forbidden: User "system:serviceaccount:kube-system:coredns" cannot list resource "namespaces" in API group "" at the cluster scope
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: services is forbidden: User "system:serviceaccount:kube-system:coredns" cannot list resource "services" in API group "" at the cluster scope
	[ERROR] plugin/kubernetes: Unhandled Error
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:52404 - 26900 "HINFO IN 3764607938427269200.9140827235858688719. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.022767923s
	
	
	==> describe nodes <==
	Name:               pause-383163
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=pause-383163
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=67931d2cc0a2e29153be17ee4f2d502b8a45c9cb
	                    minikube.k8s.io/name=pause-383163
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_09T20_02_20_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 09 Oct 2025 20:02:16 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-383163
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 09 Oct 2025 20:03:33 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 09 Oct 2025 20:03:05 +0000   Thu, 09 Oct 2025 20:02:12 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 09 Oct 2025 20:03:05 +0000   Thu, 09 Oct 2025 20:02:12 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 09 Oct 2025 20:03:05 +0000   Thu, 09 Oct 2025 20:02:12 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 09 Oct 2025 20:03:05 +0000   Thu, 09 Oct 2025 20:03:05 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    pause-383163
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022292Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022292Ki
	  pods:               110
	System Info:
	  Machine ID:                 5a885f66b42d4c388ad3d29291a058dd
	  System UUID:                4da07121-3df8-4e47-9e7a-f63fa3550e7e
	  Boot ID:                    7eb9c7d9-8be8-415a-b196-052b632584a4
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-66bc5c9577-kj4l8                100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     74s
	  kube-system                 etcd-pause-383163                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         79s
	  kube-system                 kindnet-2blxf                           100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      75s
	  kube-system                 kube-apiserver-pause-383163             250m (12%)    0 (0%)      0 (0%)           0 (0%)         79s
	  kube-system                 kube-controller-manager-pause-383163    200m (10%)    0 (0%)      0 (0%)           0 (0%)         79s
	  kube-system                 kube-proxy-9k7j8                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         75s
	  kube-system                 kube-scheduler-pause-383163             100m (5%)     0 (0%)      0 (0%)           0 (0%)         80s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age   From             Message
	  ----     ------                   ----  ----             -------
	  Normal   Starting                 73s   kube-proxy       
	  Normal   Starting                 14s   kube-proxy       
	  Normal   Starting                 79s   kubelet          Starting kubelet.
	  Warning  CgroupV1                 79s   kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  79s   kubelet          Node pause-383163 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    79s   kubelet          Node pause-383163 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     79s   kubelet          Node pause-383163 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           75s   node-controller  Node pause-383163 event: Registered Node pause-383163 in Controller
	  Normal   NodeReady                33s   kubelet          Node pause-383163 status is now: NodeReady
	  Normal   RegisteredNode           12s   node-controller  Node pause-383163 event: Registered Node pause-383163 in Controller
	
	
	==> dmesg <==
	[  +3.297009] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:28] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:36] overlayfs: idmapped layers are currently not supported
	[  +4.492991] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:37] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:38] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:40] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:45] overlayfs: idmapped layers are currently not supported
	[ +36.012100] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:47] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:48] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:49] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:50] overlayfs: idmapped layers are currently not supported
	[ +27.967875] overlayfs: idmapped layers are currently not supported
	[  +2.167003] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:51] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:52] overlayfs: idmapped layers are currently not supported
	[ +41.056229] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:53] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:54] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:55] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:57] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:59] overlayfs: idmapped layers are currently not supported
	[ +30.257956] overlayfs: idmapped layers are currently not supported
	[Oct 9 20:02] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [a3e1d7ac8b25781dbad544fea22784db5fdb0f4de80670ff5f131dc3cc536739] <==
	{"level":"warn","ts":"2025-10-09T20:03:21.375735Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51044","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T20:03:21.391609Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51054","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T20:03:21.434365Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51074","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T20:03:21.461540Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51084","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T20:03:21.480750Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51092","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T20:03:21.501478Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51106","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T20:03:21.515755Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51124","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T20:03:21.539464Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51144","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T20:03:21.556936Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51160","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T20:03:21.575518Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51188","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T20:03:21.642229Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51214","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T20:03:21.677529Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51224","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T20:03:21.715546Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51244","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T20:03:21.757436Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51256","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T20:03:21.775602Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51278","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T20:03:21.803818Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51308","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T20:03:21.847522Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51320","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T20:03:21.872653Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51346","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T20:03:21.913879Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51362","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T20:03:21.919200Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39074","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T20:03:21.939551Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39102","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T20:03:21.963817Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39118","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T20:03:21.986093Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39144","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T20:03:21.999664Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39162","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T20:03:22.120836Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39178","server-name":"","error":"EOF"}
	
	
	==> etcd [a58cd421c4789d3b1e15645239af493035656079a6c5a7405c605212d4f12db9] <==
	{"level":"warn","ts":"2025-10-09T20:02:15.230802Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51702","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T20:02:15.264631Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51722","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T20:02:15.298613Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51734","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T20:02:15.324680Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51744","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T20:02:15.344005Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51758","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T20:02:15.360557Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51772","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T20:02:15.458828Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51788","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-10-09T20:03:10.323359Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-10-09T20:03:10.323409Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"pause-383163","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.85.2:2380"],"advertise-client-urls":["https://192.168.85.2:2379"]}
	{"level":"error","ts":"2025-10-09T20:03:10.323502Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-10-09T20:03:10.472425Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-10-09T20:03:10.472527Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-09T20:03:10.472551Z","caller":"etcdserver/server.go:1281","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"9f0758e1c58a86ed","current-leader-member-id":"9f0758e1c58a86ed"}
	{"level":"info","ts":"2025-10-09T20:03:10.472663Z","caller":"etcdserver/server.go:2342","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"info","ts":"2025-10-09T20:03:10.472685Z","caller":"etcdserver/server.go:2319","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"warn","ts":"2025-10-09T20:03:10.472748Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-10-09T20:03:10.472826Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"error","ts":"2025-10-09T20:03:10.472863Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"warn","ts":"2025-10-09T20:03:10.472962Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.85.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-10-09T20:03:10.472982Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.85.2:2379: use of closed network connection"}
	{"level":"error","ts":"2025-10-09T20:03:10.472992Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.85.2:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-09T20:03:10.475929Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.85.2:2380"}
	{"level":"error","ts":"2025-10-09T20:03:10.476027Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.85.2:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-09T20:03:10.476065Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2025-10-09T20:03:10.476074Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"pause-383163","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.85.2:2380"],"advertise-client-urls":["https://192.168.85.2:2379"]}
	
	
	==> kernel <==
	 20:03:38 up  2:45,  0 user,  load average: 3.02, 2.56, 2.13
	Linux pause-383163 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [4a07552f3446603a46059c12e8713e08b798083b8d17d79c386bb391fc8c893c] <==
	I1009 20:03:18.712390       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1009 20:03:18.716840       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1009 20:03:18.717036       1 main.go:148] setting mtu 1500 for CNI 
	I1009 20:03:18.717050       1 main.go:178] kindnetd IP family: "ipv4"
	I1009 20:03:18.717070       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-09T20:03:18Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1009 20:03:18.976501       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1009 20:03:18.976536       1 controller.go:381] "Waiting for informer caches to sync"
	I1009 20:03:18.976545       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1009 20:03:18.977358       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1009 20:03:23.331133       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:serviceaccount:kube-system:kindnet\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	I1009 20:03:24.276875       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1009 20:03:24.276990       1 metrics.go:72] Registering metrics
	I1009 20:03:24.277093       1 controller.go:711] "Syncing nftables rules"
	I1009 20:03:28.976733       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1009 20:03:28.976795       1 main.go:301] handling current node
	
	
	==> kindnet [5b2ba970850f91ec7dc47036664e17c95915f9f4e974dfe18f12c57f19dc05a3] <==
	I1009 20:02:24.905278       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1009 20:02:24.905674       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1009 20:02:24.905828       1 main.go:148] setting mtu 1500 for CNI 
	I1009 20:02:24.905869       1 main.go:178] kindnetd IP family: "ipv4"
	I1009 20:02:24.905909       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-09T20:02:25Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1009 20:02:25.105600       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1009 20:02:25.105680       1 controller.go:381] "Waiting for informer caches to sync"
	I1009 20:02:25.105895       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1009 20:02:25.106558       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1009 20:02:55.106893       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1009 20:02:55.107177       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1009 20:02:55.107308       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1009 20:02:55.107439       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	I1009 20:02:56.607615       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1009 20:02:56.607664       1 metrics.go:72] Registering metrics
	I1009 20:02:56.607734       1 controller.go:711] "Syncing nftables rules"
	I1009 20:03:05.105611       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1009 20:03:05.105680       1 main.go:301] handling current node
	
	
	==> kube-apiserver [715315fe8199656e0b35e6405a491d6927104742238ac1c9811ad467110e9936] <==
	W1009 20:03:10.331024       1 logging.go:55] [core] [Channel #115 SubChannel #117]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1009 20:03:10.331074       1 logging.go:55] [core] [Channel #7 SubChannel #9]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1009 20:03:10.331122       1 logging.go:55] [core] [Channel #59 SubChannel #61]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1009 20:03:10.331175       1 logging.go:55] [core] [Channel #123 SubChannel #125]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1009 20:03:10.331220       1 logging.go:55] [core] [Channel #199 SubChannel #201]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1009 20:03:10.331301       1 logging.go:55] [core] [Channel #255 SubChannel #257]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1009 20:03:10.331350       1 logging.go:55] [core] [Channel #91 SubChannel #93]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1009 20:03:10.331472       1 logging.go:55] [core] [Channel #103 SubChannel #105]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1009 20:03:10.331625       1 logging.go:55] [core] [Channel #119 SubChannel #121]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1009 20:03:10.332413       1 logging.go:55] [core] [Channel #43 SubChannel #45]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1009 20:03:10.332468       1 logging.go:55] [core] [Channel #163 SubChannel #165]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1009 20:03:10.332517       1 logging.go:55] [core] [Channel #223 SubChannel #225]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1009 20:03:10.332558       1 logging.go:55] [core] [Channel #13 SubChannel #15]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1009 20:03:10.332594       1 logging.go:55] [core] [Channel #231 SubChannel #233]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1009 20:03:10.332636       1 logging.go:55] [core] [Channel #47 SubChannel #49]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1009 20:03:10.332679       1 logging.go:55] [core] [Channel #63 SubChannel #65]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1009 20:03:10.332720       1 logging.go:55] [core] [Channel #71 SubChannel #73]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1009 20:03:10.332773       1 logging.go:55] [core] [Channel #147 SubChannel #149]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1009 20:03:10.332811       1 logging.go:55] [core] [Channel #183 SubChannel #185]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1009 20:03:10.332849       1 logging.go:55] [core] [Channel #207 SubChannel #209]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1009 20:03:10.332904       1 logging.go:55] [core] [Channel #27 SubChannel #29]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1009 20:03:10.332945       1 logging.go:55] [core] [Channel #39 SubChannel #41]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1009 20:03:10.332991       1 logging.go:55] [core] [Channel #2 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1009 20:03:10.343103       1 logging.go:55] [core] [Channel #211 SubChannel #213]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-apiserver [cb4576736bbda7b31792b910061f810e74ebfe1099b49efb9c81dfdd2a1f445b] <==
	I1009 20:03:23.357373       1 policy_source.go:240] refreshing policies
	I1009 20:03:23.375794       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1009 20:03:23.375870       1 aggregator.go:171] initial CRD sync complete...
	I1009 20:03:23.375884       1 autoregister_controller.go:144] Starting autoregister controller
	I1009 20:03:23.375893       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1009 20:03:23.375907       1 cache.go:39] Caches are synced for autoregister controller
	I1009 20:03:23.379870       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1009 20:03:23.387857       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1009 20:03:23.388142       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1009 20:03:23.388160       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1009 20:03:23.389412       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1009 20:03:23.389644       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1009 20:03:23.389697       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1009 20:03:23.394171       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1009 20:03:23.394479       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1009 20:03:23.394572       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1009 20:03:23.399383       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1009 20:03:23.399505       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	E1009 20:03:23.405664       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1009 20:03:23.903571       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1009 20:03:25.198009       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1009 20:03:26.642498       1 controller.go:667] quota admission added evaluator for: endpoints
	I1009 20:03:26.841005       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1009 20:03:26.893559       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1009 20:03:26.994182       1 controller.go:667] quota admission added evaluator for: deployments.apps
	
	
	==> kube-controller-manager [b9eb2f7f088ee645099c5cd4b8e1f669da2435cd313728f2bfbfc759ff9937b6] <==
	I1009 20:03:26.590516       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1009 20:03:26.591668       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1009 20:03:26.593926       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1009 20:03:26.598325       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1009 20:03:26.598350       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1009 20:03:26.598357       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1009 20:03:26.603186       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1009 20:03:26.604158       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1009 20:03:26.618065       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1009 20:03:26.626650       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1009 20:03:26.626713       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1009 20:03:26.626750       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1009 20:03:26.626766       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1009 20:03:26.626773       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1009 20:03:26.629165       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1009 20:03:26.633728       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1009 20:03:26.633795       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1009 20:03:26.639182       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1009 20:03:26.639235       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1009 20:03:26.639414       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1009 20:03:26.639639       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1009 20:03:26.639951       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1009 20:03:26.645144       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1009 20:03:26.653769       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1009 20:03:26.656175       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-controller-manager [f8690925bda20a05089b5b66d446d2a265402cbc16285b44139837240ca69a30] <==
	I1009 20:02:23.331754       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1009 20:02:23.341479       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1009 20:02:23.341578       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1009 20:02:23.347899       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1009 20:02:23.349087       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1009 20:02:23.358497       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1009 20:02:23.358575       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1009 20:02:23.358611       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1009 20:02:23.358635       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1009 20:02:23.358641       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1009 20:02:23.365169       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1009 20:02:23.374069       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1009 20:02:23.378772       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1009 20:02:23.379257       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1009 20:02:23.379403       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1009 20:02:23.379496       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1009 20:02:23.379590       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1009 20:02:23.381861       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1009 20:02:23.384371       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1009 20:02:23.384490       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1009 20:02:23.384620       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1009 20:02:23.390607       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1009 20:02:23.392980       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1009 20:02:23.405942       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="pause-383163" podCIDRs=["10.244.0.0/24"]
	I1009 20:03:08.338175       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [a167ee63efd53a4275e3e7873bad1603ebe7e8a31f0dd3198756d4b1f148e52a] <==
	I1009 20:03:21.890582       1 server_linux.go:53] "Using iptables proxy"
	I1009 20:03:22.467785       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1009 20:03:23.368537       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1009 20:03:23.368701       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1009 20:03:23.368816       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1009 20:03:23.456158       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1009 20:03:23.456222       1 server_linux.go:132] "Using iptables Proxier"
	I1009 20:03:23.491801       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1009 20:03:23.492352       1 server.go:527] "Version info" version="v1.34.1"
	I1009 20:03:23.492386       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1009 20:03:23.493975       1 config.go:106] "Starting endpoint slice config controller"
	I1009 20:03:23.494061       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1009 20:03:23.494403       1 config.go:200] "Starting service config controller"
	I1009 20:03:23.494466       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1009 20:03:23.494844       1 config.go:403] "Starting serviceCIDR config controller"
	I1009 20:03:23.494901       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1009 20:03:23.495488       1 config.go:309] "Starting node config controller"
	I1009 20:03:23.495570       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1009 20:03:23.495601       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1009 20:03:23.595328       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1009 20:03:23.595367       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1009 20:03:23.595410       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-proxy [bb3480137661617835e8f2461eda88ed8e0afcd207648cb4d703a117457533cf] <==
	I1009 20:02:24.920527       1 server_linux.go:53] "Using iptables proxy"
	I1009 20:02:25.019031       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1009 20:02:25.119796       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1009 20:02:25.119833       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1009 20:02:25.119923       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1009 20:02:25.205068       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1009 20:02:25.205221       1 server_linux.go:132] "Using iptables Proxier"
	I1009 20:02:25.209897       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1009 20:02:25.210317       1 server.go:527] "Version info" version="v1.34.1"
	I1009 20:02:25.210525       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1009 20:02:25.212038       1 config.go:200] "Starting service config controller"
	I1009 20:02:25.212059       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1009 20:02:25.212077       1 config.go:106] "Starting endpoint slice config controller"
	I1009 20:02:25.212082       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1009 20:02:25.212109       1 config.go:403] "Starting serviceCIDR config controller"
	I1009 20:02:25.212120       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1009 20:02:25.212761       1 config.go:309] "Starting node config controller"
	I1009 20:02:25.212780       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1009 20:02:25.212787       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1009 20:02:25.313801       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1009 20:02:25.313835       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1009 20:02:25.313879       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [5f2a4c1ed909bfd58b69d5787042aa91a6c0d43e3eef176ba2274c648fad521a] <==
	E1009 20:02:16.545958       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1009 20:02:16.546011       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1009 20:02:16.546083       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1009 20:02:16.546159       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1009 20:02:16.546219       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1009 20:02:16.546280       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1009 20:02:16.546345       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1009 20:02:16.546402       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1009 20:02:16.546456       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1009 20:02:16.546534       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1009 20:02:16.546761       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1009 20:02:16.546883       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E1009 20:02:16.546925       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1009 20:02:16.546940       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1009 20:02:17.419458       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1009 20:02:17.446882       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1009 20:02:17.505233       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1009 20:02:17.532989       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	I1009 20:02:18.084259       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1009 20:03:10.329864       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	I1009 20:03:10.329893       1 server.go:263] "[graceful-termination] secure server has stopped listening"
	I1009 20:03:10.329914       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I1009 20:03:10.329952       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1009 20:03:10.330109       1 server.go:265] "[graceful-termination] secure server is exiting"
	E1009 20:03:10.330138       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [75d7f10be8c3e4bdde6c5890b28343819891d67d2e73eb06f38c47013ae3a3cb] <==
	I1009 20:03:22.091775       1 serving.go:386] Generated self-signed cert in-memory
	I1009 20:03:24.484580       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1009 20:03:24.484617       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1009 20:03:24.494684       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1009 20:03:24.494743       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1009 20:03:24.494800       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1009 20:03:24.494807       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1009 20:03:24.494841       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1009 20:03:24.494868       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1009 20:03:24.496192       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1009 20:03:24.496451       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1009 20:03:24.595699       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1009 20:03:24.595836       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1009 20:03:24.595940       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 09 20:03:18 pause-383163 kubelet[1291]: E1009 20:03:18.476314    1291 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-pause-383163\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="3e244fd3a5e69bdedf4fc7a419241dd5" pod="kube-system/kube-controller-manager-pause-383163"
	Oct 09 20:03:18 pause-383163 kubelet[1291]: E1009 20:03:18.476806    1291 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kindnet-2blxf\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="2bcb5c94-b301-4db9-bcf2-5f6eba8b07c7" pod="kube-system/kindnet-2blxf"
	Oct 09 20:03:18 pause-383163 kubelet[1291]: E1009 20:03:18.477087    1291 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-9k7j8\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="b521ebd5-2359-4c44-9357-f2ac6cdd9719" pod="kube-system/kube-proxy-9k7j8"
	Oct 09 20:03:18 pause-383163 kubelet[1291]: E1009 20:03:18.477343    1291 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/coredns-66bc5c9577-kj4l8\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="9347b1f1-06ba-4612-96e4-9f5e09ba2500" pod="kube-system/coredns-66bc5c9577-kj4l8"
	Oct 09 20:03:23 pause-383163 kubelet[1291]: E1009 20:03:23.118300    1291 status_manager.go:1018] "Failed to get status for pod" err="pods \"kube-scheduler-pause-383163\" is forbidden: User \"system:node:pause-383163\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-383163' and this object" podUID="3dc83ccd270fb312848e6bb9a10a204a" pod="kube-system/kube-scheduler-pause-383163"
	Oct 09 20:03:23 pause-383163 kubelet[1291]: E1009 20:03:23.119171    1291 reflector.go:205] "Failed to watch" err="configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:pause-383163\" cannot watch resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-383163' and this object" logger="UnhandledError" reflector="object-\"kube-system\"/\"kube-root-ca.crt\"" type="*v1.ConfigMap"
	Oct 09 20:03:23 pause-383163 kubelet[1291]: E1009 20:03:23.119427    1291 reflector.go:205] "Failed to watch" err="configmaps \"kube-proxy\" is forbidden: User \"system:node:pause-383163\" cannot watch resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-383163' and this object" logger="UnhandledError" reflector="object-\"kube-system\"/\"kube-proxy\"" type="*v1.ConfigMap"
	Oct 09 20:03:23 pause-383163 kubelet[1291]: E1009 20:03:23.278734    1291 status_manager.go:1018] "Failed to get status for pod" err="pods \"kube-apiserver-pause-383163\" is forbidden: User \"system:node:pause-383163\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-383163' and this object" podUID="3dcaaa6838efe81036d876fec785ce3f" pod="kube-system/kube-apiserver-pause-383163"
	Oct 09 20:03:23 pause-383163 kubelet[1291]: E1009 20:03:23.294100    1291 status_manager.go:1018] "Failed to get status for pod" err="pods \"etcd-pause-383163\" is forbidden: User \"system:node:pause-383163\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-383163' and this object" podUID="739ecb3823ee6112ca137b686c87fc3b" pod="kube-system/etcd-pause-383163"
	Oct 09 20:03:23 pause-383163 kubelet[1291]: E1009 20:03:23.296475    1291 status_manager.go:1018] "Failed to get status for pod" err="pods \"kube-controller-manager-pause-383163\" is forbidden: User \"system:node:pause-383163\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-383163' and this object" podUID="3e244fd3a5e69bdedf4fc7a419241dd5" pod="kube-system/kube-controller-manager-pause-383163"
	Oct 09 20:03:23 pause-383163 kubelet[1291]: E1009 20:03:23.299397    1291 status_manager.go:1018] "Failed to get status for pod" err="pods \"kindnet-2blxf\" is forbidden: User \"system:node:pause-383163\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-383163' and this object" podUID="2bcb5c94-b301-4db9-bcf2-5f6eba8b07c7" pod="kube-system/kindnet-2blxf"
	Oct 09 20:03:23 pause-383163 kubelet[1291]: E1009 20:03:23.302495    1291 status_manager.go:1018] "Failed to get status for pod" err="pods \"kube-proxy-9k7j8\" is forbidden: User \"system:node:pause-383163\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-383163' and this object" podUID="b521ebd5-2359-4c44-9357-f2ac6cdd9719" pod="kube-system/kube-proxy-9k7j8"
	Oct 09 20:03:23 pause-383163 kubelet[1291]: E1009 20:03:23.310269    1291 status_manager.go:1018] "Failed to get status for pod" err=<
	Oct 09 20:03:23 pause-383163 kubelet[1291]:         pods "coredns-66bc5c9577-kj4l8" is forbidden: User "system:node:pause-383163" cannot get resource "pods" in API group "" in the namespace "kube-system": no relationship found between node 'pause-383163' and this object
	Oct 09 20:03:23 pause-383163 kubelet[1291]:         RBAC: [role.rbac.authorization.k8s.io "kubeadm:kubelet-config" not found, role.rbac.authorization.k8s.io "kubeadm:nodes-kubeadm-config" not found]
	Oct 09 20:03:23 pause-383163 kubelet[1291]:  > podUID="9347b1f1-06ba-4612-96e4-9f5e09ba2500" pod="kube-system/coredns-66bc5c9577-kj4l8"
	Oct 09 20:03:23 pause-383163 kubelet[1291]: E1009 20:03:23.318606    1291 status_manager.go:1018] "Failed to get status for pod" err="pods \"kube-apiserver-pause-383163\" is forbidden: User \"system:node:pause-383163\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-383163' and this object" podUID="3dcaaa6838efe81036d876fec785ce3f" pod="kube-system/kube-apiserver-pause-383163"
	Oct 09 20:03:23 pause-383163 kubelet[1291]: E1009 20:03:23.319890    1291 status_manager.go:1018] "Failed to get status for pod" err="pods \"etcd-pause-383163\" is forbidden: User \"system:node:pause-383163\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-383163' and this object" podUID="739ecb3823ee6112ca137b686c87fc3b" pod="kube-system/etcd-pause-383163"
	Oct 09 20:03:23 pause-383163 kubelet[1291]: E1009 20:03:23.323034    1291 status_manager.go:1018] "Failed to get status for pod" err="pods \"kube-controller-manager-pause-383163\" is forbidden: User \"system:node:pause-383163\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-383163' and this object" podUID="3e244fd3a5e69bdedf4fc7a419241dd5" pod="kube-system/kube-controller-manager-pause-383163"
	Oct 09 20:03:23 pause-383163 kubelet[1291]: E1009 20:03:23.324168    1291 status_manager.go:1018] "Failed to get status for pod" err="pods \"kindnet-2blxf\" is forbidden: User \"system:node:pause-383163\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-383163' and this object" podUID="2bcb5c94-b301-4db9-bcf2-5f6eba8b07c7" pod="kube-system/kindnet-2blxf"
	Oct 09 20:03:23 pause-383163 kubelet[1291]: E1009 20:03:23.326198    1291 status_manager.go:1018] "Failed to get status for pod" err="pods \"kube-proxy-9k7j8\" is forbidden: User \"system:node:pause-383163\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-383163' and this object" podUID="b521ebd5-2359-4c44-9357-f2ac6cdd9719" pod="kube-system/kube-proxy-9k7j8"
	Oct 09 20:03:29 pause-383163 kubelet[1291]: W1009 20:03:29.486903    1291 conversion.go:112] Could not get instant cpu stats: cumulative stats decrease
	Oct 09 20:03:35 pause-383163 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 09 20:03:35 pause-383163 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 09 20:03:35 pause-383163 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p pause-383163 -n pause-383163
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p pause-383163 -n pause-383163: exit status 2 (370.500766ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context pause-383163 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestPause/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestPause/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestPause/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect pause-383163
helpers_test.go:243: (dbg) docker inspect pause-383163:

-- stdout --
	[
	    {
	        "Id": "5b864bacdbf740035af5526f14a34657026d770c3700f8ba7f8bb641f5902864",
	        "Created": "2025-10-09T20:01:53.600639875Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 451656,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-09T20:01:53.679652006Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:0e619a02aa1f399c748a05785ca381533d7a01df724b62f3a9ddd9e288db8f6f",
	        "ResolvConfPath": "/var/lib/docker/containers/5b864bacdbf740035af5526f14a34657026d770c3700f8ba7f8bb641f5902864/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/5b864bacdbf740035af5526f14a34657026d770c3700f8ba7f8bb641f5902864/hostname",
	        "HostsPath": "/var/lib/docker/containers/5b864bacdbf740035af5526f14a34657026d770c3700f8ba7f8bb641f5902864/hosts",
	        "LogPath": "/var/lib/docker/containers/5b864bacdbf740035af5526f14a34657026d770c3700f8ba7f8bb641f5902864/5b864bacdbf740035af5526f14a34657026d770c3700f8ba7f8bb641f5902864-json.log",
	        "Name": "/pause-383163",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "pause-383163:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "pause-383163",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "5b864bacdbf740035af5526f14a34657026d770c3700f8ba7f8bb641f5902864",
	                "LowerDir": "/var/lib/docker/overlay2/26d4f9b68a6ffc726dd4e3fe961e65b60a4439463009444074acbaa166b8fae8-init/diff:/var/lib/docker/overlay2/810a91395ed9b7ed2c0bbbdee8600efcf64f88722cbabc47d471235a9f901ed9/diff",
	                "MergedDir": "/var/lib/docker/overlay2/26d4f9b68a6ffc726dd4e3fe961e65b60a4439463009444074acbaa166b8fae8/merged",
	                "UpperDir": "/var/lib/docker/overlay2/26d4f9b68a6ffc726dd4e3fe961e65b60a4439463009444074acbaa166b8fae8/diff",
	                "WorkDir": "/var/lib/docker/overlay2/26d4f9b68a6ffc726dd4e3fe961e65b60a4439463009444074acbaa166b8fae8/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "pause-383163",
	                "Source": "/var/lib/docker/volumes/pause-383163/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "pause-383163",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "pause-383163",
	                "name.minikube.sigs.k8s.io": "pause-383163",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "eb5ce84d03467f28a9daf68107a371dc28f5dc96c9a5c184625f5a0d3eac44e8",
	            "SandboxKey": "/var/run/docker/netns/eb5ce84d0346",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33391"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33392"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33395"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33393"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33394"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "pause-383163": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "fa:88:3b:e9:01:7a",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "ed7ad6a5b4e1cfaa878c0b11825ac42d5f0a339e60392d3e4a9c05ec240e619a",
	                    "EndpointID": "f4f17eaa70d22135d3163a12c969128ec19389180d914fdac7f64a3ca204f037",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "pause-383163",
	                        "5b864bacdbf7"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p pause-383163 -n pause-383163
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p pause-383163 -n pause-383163: exit status 2 (348.048332ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestPause/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestPause/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p pause-383163 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p pause-383163 logs -n 25: (1.512833847s)
helpers_test.go:260: TestPause/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                   ARGS                                                                   │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -p NoKubernetes-965213 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                                    │ NoKubernetes-965213       │ jenkins │ v1.37.0 │ 09 Oct 25 19:57 UTC │ 09 Oct 25 19:57 UTC │
	│ start   │ -p missing-upgrade-917803 --memory=3072 --driver=docker  --container-runtime=crio                                                        │ missing-upgrade-917803    │ jenkins │ v1.32.0 │ 09 Oct 25 19:57 UTC │ 09 Oct 25 19:58 UTC │
	│ start   │ -p NoKubernetes-965213 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                    │ NoKubernetes-965213       │ jenkins │ v1.37.0 │ 09 Oct 25 19:57 UTC │ 09 Oct 25 19:58 UTC │
	│ delete  │ -p NoKubernetes-965213                                                                                                                   │ NoKubernetes-965213       │ jenkins │ v1.37.0 │ 09 Oct 25 19:58 UTC │ 09 Oct 25 19:58 UTC │
	│ start   │ -p NoKubernetes-965213 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                    │ NoKubernetes-965213       │ jenkins │ v1.37.0 │ 09 Oct 25 19:58 UTC │ 09 Oct 25 19:58 UTC │
	│ ssh     │ -p NoKubernetes-965213 sudo systemctl is-active --quiet service kubelet                                                                  │ NoKubernetes-965213       │ jenkins │ v1.37.0 │ 09 Oct 25 19:58 UTC │                     │
	│ stop    │ -p NoKubernetes-965213                                                                                                                   │ NoKubernetes-965213       │ jenkins │ v1.37.0 │ 09 Oct 25 19:58 UTC │ 09 Oct 25 19:58 UTC │
	│ start   │ -p NoKubernetes-965213 --driver=docker  --container-runtime=crio                                                                         │ NoKubernetes-965213       │ jenkins │ v1.37.0 │ 09 Oct 25 19:58 UTC │ 09 Oct 25 19:58 UTC │
	│ ssh     │ -p NoKubernetes-965213 sudo systemctl is-active --quiet service kubelet                                                                  │ NoKubernetes-965213       │ jenkins │ v1.37.0 │ 09 Oct 25 19:58 UTC │                     │
	│ delete  │ -p NoKubernetes-965213                                                                                                                   │ NoKubernetes-965213       │ jenkins │ v1.37.0 │ 09 Oct 25 19:58 UTC │ 09 Oct 25 19:58 UTC │
	│ start   │ -p kubernetes-upgrade-164946 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio │ kubernetes-upgrade-164946 │ jenkins │ v1.37.0 │ 09 Oct 25 19:58 UTC │ 09 Oct 25 19:59 UTC │
	│ start   │ -p missing-upgrade-917803 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                 │ missing-upgrade-917803    │ jenkins │ v1.37.0 │ 09 Oct 25 19:58 UTC │ 09 Oct 25 19:59 UTC │
	│ stop    │ -p kubernetes-upgrade-164946                                                                                                             │ kubernetes-upgrade-164946 │ jenkins │ v1.37.0 │ 09 Oct 25 19:59 UTC │ 09 Oct 25 19:59 UTC │
	│ start   │ -p kubernetes-upgrade-164946 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio │ kubernetes-upgrade-164946 │ jenkins │ v1.37.0 │ 09 Oct 25 19:59 UTC │                     │
	│ delete  │ -p missing-upgrade-917803                                                                                                                │ missing-upgrade-917803    │ jenkins │ v1.37.0 │ 09 Oct 25 19:59 UTC │ 09 Oct 25 19:59 UTC │
	│ start   │ -p stopped-upgrade-265052 --memory=3072 --vm-driver=docker  --container-runtime=crio                                                     │ stopped-upgrade-265052    │ jenkins │ v1.32.0 │ 09 Oct 25 19:59 UTC │ 09 Oct 25 20:00 UTC │
	│ stop    │ stopped-upgrade-265052 stop                                                                                                              │ stopped-upgrade-265052    │ jenkins │ v1.32.0 │ 09 Oct 25 20:00 UTC │ 09 Oct 25 20:00 UTC │
	│ start   │ -p stopped-upgrade-265052 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                 │ stopped-upgrade-265052    │ jenkins │ v1.37.0 │ 09 Oct 25 20:00 UTC │ 09 Oct 25 20:00 UTC │
	│ delete  │ -p stopped-upgrade-265052                                                                                                                │ stopped-upgrade-265052    │ jenkins │ v1.37.0 │ 09 Oct 25 20:00 UTC │ 09 Oct 25 20:00 UTC │
	│ start   │ -p running-upgrade-055303 --memory=3072 --vm-driver=docker  --container-runtime=crio                                                     │ running-upgrade-055303    │ jenkins │ v1.32.0 │ 09 Oct 25 20:00 UTC │ 09 Oct 25 20:01 UTC │
	│ start   │ -p running-upgrade-055303 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                 │ running-upgrade-055303    │ jenkins │ v1.37.0 │ 09 Oct 25 20:01 UTC │ 09 Oct 25 20:01 UTC │
	│ delete  │ -p running-upgrade-055303                                                                                                                │ running-upgrade-055303    │ jenkins │ v1.37.0 │ 09 Oct 25 20:01 UTC │ 09 Oct 25 20:01 UTC │
	│ start   │ -p pause-383163 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio                                │ pause-383163              │ jenkins │ v1.37.0 │ 09 Oct 25 20:01 UTC │ 09 Oct 25 20:03 UTC │
	│ start   │ -p pause-383163 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                         │ pause-383163              │ jenkins │ v1.37.0 │ 09 Oct 25 20:03 UTC │ 09 Oct 25 20:03 UTC │
	│ pause   │ -p pause-383163 --alsologtostderr -v=5                                                                                                   │ pause-383163              │ jenkins │ v1.37.0 │ 09 Oct 25 20:03 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/09 20:03:08
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1009 20:03:08.245201  455689 out.go:360] Setting OutFile to fd 1 ...
	I1009 20:03:08.245424  455689 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1009 20:03:08.245456  455689 out.go:374] Setting ErrFile to fd 2...
	I1009 20:03:08.245477  455689 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1009 20:03:08.245751  455689 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21683-294150/.minikube/bin
	I1009 20:03:08.246163  455689 out.go:368] Setting JSON to false
	I1009 20:03:08.247158  455689 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":9928,"bootTime":1760030261,"procs":201,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1009 20:03:08.247258  455689 start.go:143] virtualization:  
	I1009 20:03:08.250788  455689 out.go:179] * [pause-383163] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1009 20:03:08.254828  455689 notify.go:221] Checking for updates...
	I1009 20:03:08.258068  455689 out.go:179]   - MINIKUBE_LOCATION=21683
	I1009 20:03:08.261245  455689 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1009 20:03:08.264164  455689 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21683-294150/kubeconfig
	I1009 20:03:08.267175  455689 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21683-294150/.minikube
	I1009 20:03:08.270205  455689 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1009 20:03:08.273231  455689 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1009 20:03:08.276766  455689 config.go:182] Loaded profile config "pause-383163": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1009 20:03:08.277381  455689 driver.go:422] Setting default libvirt URI to qemu:///system
	I1009 20:03:08.312157  455689 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1009 20:03:08.312344  455689 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1009 20:03:08.372319  455689 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:51 OomKillDisable:true NGoroutines:62 SystemTime:2025-10-09 20:03:08.36245686 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aa
rch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214827008 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path
:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1009 20:03:08.372439  455689 docker.go:319] overlay module found
	I1009 20:03:08.377616  455689 out.go:179] * Using the docker driver based on existing profile
	I1009 20:03:08.380300  455689 start.go:309] selected driver: docker
	I1009 20:03:08.380325  455689 start.go:930] validating driver "docker" against &{Name:pause-383163 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:pause-383163 Namespace:default APIServerHAVIP: APIServerName:minikub
eCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false regi
stry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1009 20:03:08.380455  455689 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1009 20:03:08.380562  455689 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1009 20:03:08.453067  455689 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:51 OomKillDisable:true NGoroutines:62 SystemTime:2025-10-09 20:03:08.44360941 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aa
rch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214827008 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path
:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1009 20:03:08.453556  455689 cni.go:84] Creating CNI manager for ""
	I1009 20:03:08.453627  455689 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1009 20:03:08.453679  455689 start.go:353] cluster config:
	{Name:pause-383163 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:pause-383163 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:c
rio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false
storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1009 20:03:08.456735  455689 out.go:179] * Starting "pause-383163" primary control-plane node in "pause-383163" cluster
	I1009 20:03:08.459515  455689 cache.go:123] Beginning downloading kic base image for docker with crio
	I1009 20:03:08.462464  455689 out.go:179] * Pulling base image v0.0.48-1759745255-21703 ...
	I1009 20:03:08.465443  455689 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1009 20:03:08.465506  455689 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21683-294150/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1009 20:03:08.465520  455689 cache.go:58] Caching tarball of preloaded images
	I1009 20:03:08.465618  455689 preload.go:233] Found /home/jenkins/minikube-integration/21683-294150/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1009 20:03:08.465636  455689 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1009 20:03:08.465790  455689 profile.go:143] Saving config to /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/pause-383163/config.json ...
	I1009 20:03:08.466038  455689 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon
	I1009 20:03:08.491007  455689 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon, skipping pull
	I1009 20:03:08.491034  455689 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 exists in daemon, skipping load
	I1009 20:03:08.491049  455689 cache.go:232] Successfully downloaded all kic artifacts
	I1009 20:03:08.491073  455689 start.go:361] acquireMachinesLock for pause-383163: {Name:mk41ce8a74c4d0ecbb9030f4498a10ad28cda730 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1009 20:03:08.491134  455689 start.go:365] duration metric: took 39.762µs to acquireMachinesLock for "pause-383163"
	I1009 20:03:08.491158  455689 start.go:97] Skipping create...Using existing machine configuration
	I1009 20:03:08.491164  455689 fix.go:55] fixHost starting: 
	I1009 20:03:08.491440  455689 cli_runner.go:164] Run: docker container inspect pause-383163 --format={{.State.Status}}
	I1009 20:03:08.508803  455689 fix.go:113] recreateIfNeeded on pause-383163: state=Running err=<nil>
	W1009 20:03:08.508843  455689 fix.go:139] unexpected machine state, will restart: <nil>
	I1009 20:03:07.259102  439734 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1009 20:03:07.259543  439734 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1009 20:03:07.259592  439734 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 20:03:07.259653  439734 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 20:03:07.305811  439734 cri.go:89] found id: "d77ac3de8a7a39d648d1517d5fa734b7495e3f909d23628a34c76de3d5ef57f6"
	I1009 20:03:07.305832  439734 cri.go:89] found id: ""
	I1009 20:03:07.305840  439734 logs.go:282] 1 containers: [d77ac3de8a7a39d648d1517d5fa734b7495e3f909d23628a34c76de3d5ef57f6]
	I1009 20:03:07.305900  439734 ssh_runner.go:195] Run: which crictl
	I1009 20:03:07.311192  439734 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 20:03:07.311276  439734 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 20:03:07.340385  439734 cri.go:89] found id: ""
	I1009 20:03:07.340409  439734 logs.go:282] 0 containers: []
	W1009 20:03:07.340419  439734 logs.go:284] No container was found matching "etcd"
	I1009 20:03:07.340426  439734 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 20:03:07.340487  439734 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 20:03:07.370982  439734 cri.go:89] found id: ""
	I1009 20:03:07.371057  439734 logs.go:282] 0 containers: []
	W1009 20:03:07.371083  439734 logs.go:284] No container was found matching "coredns"
	I1009 20:03:07.371099  439734 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 20:03:07.371178  439734 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 20:03:07.405196  439734 cri.go:89] found id: "860dfe4c78e184a9fee28ff64c7bf737667b86888d84a7c16dc138690d6b945c"
	I1009 20:03:07.405222  439734 cri.go:89] found id: ""
	I1009 20:03:07.405231  439734 logs.go:282] 1 containers: [860dfe4c78e184a9fee28ff64c7bf737667b86888d84a7c16dc138690d6b945c]
	I1009 20:03:07.405323  439734 ssh_runner.go:195] Run: which crictl
	I1009 20:03:07.409511  439734 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 20:03:07.409630  439734 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 20:03:07.436489  439734 cri.go:89] found id: ""
	I1009 20:03:07.436516  439734 logs.go:282] 0 containers: []
	W1009 20:03:07.436525  439734 logs.go:284] No container was found matching "kube-proxy"
	I1009 20:03:07.436533  439734 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 20:03:07.436594  439734 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 20:03:07.464694  439734 cri.go:89] found id: "56dd658f7c0543e07581d314639507f53947ef9a0096c18a5f0d9d2205eee024"
	I1009 20:03:07.464716  439734 cri.go:89] found id: "7fd5bbac345d9ddc389dd6302ea7d94b7b1cdec1b99f5b507688eff890a69a9f"
	I1009 20:03:07.464733  439734 cri.go:89] found id: ""
	I1009 20:03:07.464742  439734 logs.go:282] 2 containers: [56dd658f7c0543e07581d314639507f53947ef9a0096c18a5f0d9d2205eee024 7fd5bbac345d9ddc389dd6302ea7d94b7b1cdec1b99f5b507688eff890a69a9f]
	I1009 20:03:07.464812  439734 ssh_runner.go:195] Run: which crictl
	I1009 20:03:07.468551  439734 ssh_runner.go:195] Run: which crictl
	I1009 20:03:07.472315  439734 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 20:03:07.472439  439734 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 20:03:07.509615  439734 cri.go:89] found id: ""
	I1009 20:03:07.509694  439734 logs.go:282] 0 containers: []
	W1009 20:03:07.509718  439734 logs.go:284] No container was found matching "kindnet"
	I1009 20:03:07.509756  439734 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1009 20:03:07.509861  439734 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1009 20:03:07.545695  439734 cri.go:89] found id: ""
	I1009 20:03:07.545717  439734 logs.go:282] 0 containers: []
	W1009 20:03:07.545726  439734 logs.go:284] No container was found matching "storage-provisioner"
	I1009 20:03:07.545750  439734 logs.go:123] Gathering logs for describe nodes ...
	I1009 20:03:07.545770  439734 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 20:03:07.632406  439734 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 20:03:07.632437  439734 logs.go:123] Gathering logs for kube-apiserver [d77ac3de8a7a39d648d1517d5fa734b7495e3f909d23628a34c76de3d5ef57f6] ...
	I1009 20:03:07.632450  439734 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d77ac3de8a7a39d648d1517d5fa734b7495e3f909d23628a34c76de3d5ef57f6"
	I1009 20:03:07.667939  439734 logs.go:123] Gathering logs for kube-scheduler [860dfe4c78e184a9fee28ff64c7bf737667b86888d84a7c16dc138690d6b945c] ...
	I1009 20:03:07.667970  439734 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 860dfe4c78e184a9fee28ff64c7bf737667b86888d84a7c16dc138690d6b945c"
	I1009 20:03:07.736362  439734 logs.go:123] Gathering logs for kube-controller-manager [7fd5bbac345d9ddc389dd6302ea7d94b7b1cdec1b99f5b507688eff890a69a9f] ...
	I1009 20:03:07.736402  439734 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7fd5bbac345d9ddc389dd6302ea7d94b7b1cdec1b99f5b507688eff890a69a9f"
	I1009 20:03:07.766150  439734 logs.go:123] Gathering logs for container status ...
	I1009 20:03:07.766182  439734 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 20:03:07.802950  439734 logs.go:123] Gathering logs for kubelet ...
	I1009 20:03:07.802982  439734 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 20:03:07.919126  439734 logs.go:123] Gathering logs for dmesg ...
	I1009 20:03:07.919170  439734 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 20:03:07.936344  439734 logs.go:123] Gathering logs for kube-controller-manager [56dd658f7c0543e07581d314639507f53947ef9a0096c18a5f0d9d2205eee024] ...
	I1009 20:03:07.936375  439734 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 56dd658f7c0543e07581d314639507f53947ef9a0096c18a5f0d9d2205eee024"
	I1009 20:03:07.967682  439734 logs.go:123] Gathering logs for CRI-O ...
	I1009 20:03:07.967708  439734 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 20:03:08.512042  455689 out.go:252] * Updating the running docker "pause-383163" container ...
	I1009 20:03:08.512089  455689 machine.go:93] provisionDockerMachine start ...
	I1009 20:03:08.512192  455689 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-383163
	I1009 20:03:08.536291  455689 main.go:141] libmachine: Using SSH client type: native
	I1009 20:03:08.536633  455689 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33391 <nil> <nil>}
	I1009 20:03:08.536648  455689 main.go:141] libmachine: About to run SSH command:
	hostname
	I1009 20:03:08.689064  455689 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-383163
	
	I1009 20:03:08.689097  455689 ubuntu.go:182] provisioning hostname "pause-383163"
	I1009 20:03:08.689189  455689 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-383163
	I1009 20:03:08.706929  455689 main.go:141] libmachine: Using SSH client type: native
	I1009 20:03:08.707450  455689 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33391 <nil> <nil>}
	I1009 20:03:08.707470  455689 main.go:141] libmachine: About to run SSH command:
	sudo hostname pause-383163 && echo "pause-383163" | sudo tee /etc/hostname
	I1009 20:03:08.870436  455689 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-383163
	
	I1009 20:03:08.870514  455689 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-383163
	I1009 20:03:08.889433  455689 main.go:141] libmachine: Using SSH client type: native
	I1009 20:03:08.889754  455689 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33391 <nil> <nil>}
	I1009 20:03:08.889777  455689 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\spause-383163' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-383163/g' /etc/hosts;
				else 
					echo '127.0.1.1 pause-383163' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1009 20:03:09.041848  455689 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1009 20:03:09.041878  455689 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21683-294150/.minikube CaCertPath:/home/jenkins/minikube-integration/21683-294150/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21683-294150/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21683-294150/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21683-294150/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21683-294150/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21683-294150/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21683-294150/.minikube}
	I1009 20:03:09.041913  455689 ubuntu.go:190] setting up certificates
	I1009 20:03:09.041923  455689 provision.go:84] configureAuth start
	I1009 20:03:09.041987  455689 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" pause-383163
	I1009 20:03:09.061029  455689 provision.go:143] copyHostCerts
	I1009 20:03:09.061265  455689 exec_runner.go:144] found /home/jenkins/minikube-integration/21683-294150/.minikube/ca.pem, removing ...
	I1009 20:03:09.061283  455689 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21683-294150/.minikube/ca.pem
	I1009 20:03:09.061367  455689 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-294150/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21683-294150/.minikube/ca.pem (1078 bytes)
	I1009 20:03:09.061476  455689 exec_runner.go:144] found /home/jenkins/minikube-integration/21683-294150/.minikube/cert.pem, removing ...
	I1009 20:03:09.061489  455689 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21683-294150/.minikube/cert.pem
	I1009 20:03:09.061520  455689 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-294150/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21683-294150/.minikube/cert.pem (1123 bytes)
	I1009 20:03:09.061572  455689 exec_runner.go:144] found /home/jenkins/minikube-integration/21683-294150/.minikube/key.pem, removing ...
	I1009 20:03:09.061583  455689 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21683-294150/.minikube/key.pem
	I1009 20:03:09.061611  455689 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-294150/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21683-294150/.minikube/key.pem (1679 bytes)
	I1009 20:03:09.061664  455689 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21683-294150/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21683-294150/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21683-294150/.minikube/certs/ca-key.pem org=jenkins.pause-383163 san=[127.0.0.1 192.168.85.2 localhost minikube pause-383163]
	I1009 20:03:09.935554  455689 provision.go:177] copyRemoteCerts
	I1009 20:03:09.935633  455689 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1009 20:03:09.935674  455689 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-383163
	I1009 20:03:09.953542  455689 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33391 SSHKeyPath:/home/jenkins/minikube-integration/21683-294150/.minikube/machines/pause-383163/id_rsa Username:docker}
	I1009 20:03:10.065752  455689 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-294150/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1009 20:03:10.086788  455689 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-294150/.minikube/machines/server.pem --> /etc/docker/server.pem (1204 bytes)
	I1009 20:03:10.107309  455689 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-294150/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1009 20:03:10.127136  455689 provision.go:87] duration metric: took 1.085197762s to configureAuth
	I1009 20:03:10.127162  455689 ubuntu.go:206] setting minikube options for container-runtime
	I1009 20:03:10.127390  455689 config.go:182] Loaded profile config "pause-383163": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1009 20:03:10.127543  455689 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-383163
	I1009 20:03:10.145742  455689 main.go:141] libmachine: Using SSH client type: native
	I1009 20:03:10.146041  455689 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33391 <nil> <nil>}
	I1009 20:03:10.146060  455689 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1009 20:03:10.528528  439734 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1009 20:03:10.528986  439734 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1009 20:03:10.529041  439734 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 20:03:10.529132  439734 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 20:03:10.555687  439734 cri.go:89] found id: "d77ac3de8a7a39d648d1517d5fa734b7495e3f909d23628a34c76de3d5ef57f6"
	I1009 20:03:10.555711  439734 cri.go:89] found id: ""
	I1009 20:03:10.555720  439734 logs.go:282] 1 containers: [d77ac3de8a7a39d648d1517d5fa734b7495e3f909d23628a34c76de3d5ef57f6]
	I1009 20:03:10.555781  439734 ssh_runner.go:195] Run: which crictl
	I1009 20:03:10.559489  439734 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 20:03:10.559573  439734 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 20:03:10.585018  439734 cri.go:89] found id: ""
	I1009 20:03:10.585040  439734 logs.go:282] 0 containers: []
	W1009 20:03:10.585048  439734 logs.go:284] No container was found matching "etcd"
	I1009 20:03:10.585055  439734 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 20:03:10.585171  439734 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 20:03:10.616457  439734 cri.go:89] found id: ""
	I1009 20:03:10.616483  439734 logs.go:282] 0 containers: []
	W1009 20:03:10.616492  439734 logs.go:284] No container was found matching "coredns"
	I1009 20:03:10.616499  439734 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 20:03:10.616562  439734 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 20:03:10.645807  439734 cri.go:89] found id: "860dfe4c78e184a9fee28ff64c7bf737667b86888d84a7c16dc138690d6b945c"
	I1009 20:03:10.645833  439734 cri.go:89] found id: ""
	I1009 20:03:10.645842  439734 logs.go:282] 1 containers: [860dfe4c78e184a9fee28ff64c7bf737667b86888d84a7c16dc138690d6b945c]
	I1009 20:03:10.645902  439734 ssh_runner.go:195] Run: which crictl
	I1009 20:03:10.649727  439734 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 20:03:10.649798  439734 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 20:03:10.680924  439734 cri.go:89] found id: ""
	I1009 20:03:10.680947  439734 logs.go:282] 0 containers: []
	W1009 20:03:10.680956  439734 logs.go:284] No container was found matching "kube-proxy"
	I1009 20:03:10.680963  439734 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 20:03:10.681022  439734 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 20:03:10.713037  439734 cri.go:89] found id: "56dd658f7c0543e07581d314639507f53947ef9a0096c18a5f0d9d2205eee024"
	I1009 20:03:10.713062  439734 cri.go:89] found id: "7fd5bbac345d9ddc389dd6302ea7d94b7b1cdec1b99f5b507688eff890a69a9f"
	I1009 20:03:10.713067  439734 cri.go:89] found id: ""
	I1009 20:03:10.713075  439734 logs.go:282] 2 containers: [56dd658f7c0543e07581d314639507f53947ef9a0096c18a5f0d9d2205eee024 7fd5bbac345d9ddc389dd6302ea7d94b7b1cdec1b99f5b507688eff890a69a9f]
	I1009 20:03:10.713169  439734 ssh_runner.go:195] Run: which crictl
	I1009 20:03:10.717175  439734 ssh_runner.go:195] Run: which crictl
	I1009 20:03:10.720768  439734 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 20:03:10.720844  439734 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 20:03:10.747632  439734 cri.go:89] found id: ""
	I1009 20:03:10.747656  439734 logs.go:282] 0 containers: []
	W1009 20:03:10.747665  439734 logs.go:284] No container was found matching "kindnet"
	I1009 20:03:10.747672  439734 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1009 20:03:10.747734  439734 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1009 20:03:10.777513  439734 cri.go:89] found id: ""
	I1009 20:03:10.777579  439734 logs.go:282] 0 containers: []
	W1009 20:03:10.777603  439734 logs.go:284] No container was found matching "storage-provisioner"
	I1009 20:03:10.777638  439734 logs.go:123] Gathering logs for kubelet ...
	I1009 20:03:10.777673  439734 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 20:03:10.893125  439734 logs.go:123] Gathering logs for kube-apiserver [d77ac3de8a7a39d648d1517d5fa734b7495e3f909d23628a34c76de3d5ef57f6] ...
	I1009 20:03:10.893208  439734 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d77ac3de8a7a39d648d1517d5fa734b7495e3f909d23628a34c76de3d5ef57f6"
	I1009 20:03:10.927850  439734 logs.go:123] Gathering logs for kube-controller-manager [56dd658f7c0543e07581d314639507f53947ef9a0096c18a5f0d9d2205eee024] ...
	I1009 20:03:10.927928  439734 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 56dd658f7c0543e07581d314639507f53947ef9a0096c18a5f0d9d2205eee024"
	I1009 20:03:10.959523  439734 logs.go:123] Gathering logs for kube-controller-manager [7fd5bbac345d9ddc389dd6302ea7d94b7b1cdec1b99f5b507688eff890a69a9f] ...
	I1009 20:03:10.959552  439734 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7fd5bbac345d9ddc389dd6302ea7d94b7b1cdec1b99f5b507688eff890a69a9f"
	I1009 20:03:10.988034  439734 logs.go:123] Gathering logs for container status ...
	I1009 20:03:10.988063  439734 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 20:03:11.029070  439734 logs.go:123] Gathering logs for dmesg ...
	I1009 20:03:11.029095  439734 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 20:03:11.045831  439734 logs.go:123] Gathering logs for describe nodes ...
	I1009 20:03:11.045858  439734 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 20:03:11.124372  439734 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 20:03:11.124394  439734 logs.go:123] Gathering logs for kube-scheduler [860dfe4c78e184a9fee28ff64c7bf737667b86888d84a7c16dc138690d6b945c] ...
	I1009 20:03:11.124412  439734 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 860dfe4c78e184a9fee28ff64c7bf737667b86888d84a7c16dc138690d6b945c"
	I1009 20:03:11.194997  439734 logs.go:123] Gathering logs for CRI-O ...
	I1009 20:03:11.195031  439734 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 20:03:13.759016  439734 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1009 20:03:13.759468  439734 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1009 20:03:13.759544  439734 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 20:03:13.759622  439734 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 20:03:13.787152  439734 cri.go:89] found id: "d77ac3de8a7a39d648d1517d5fa734b7495e3f909d23628a34c76de3d5ef57f6"
	I1009 20:03:13.787171  439734 cri.go:89] found id: ""
	I1009 20:03:13.787180  439734 logs.go:282] 1 containers: [d77ac3de8a7a39d648d1517d5fa734b7495e3f909d23628a34c76de3d5ef57f6]
	I1009 20:03:13.787238  439734 ssh_runner.go:195] Run: which crictl
	I1009 20:03:13.791201  439734 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 20:03:13.791283  439734 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 20:03:13.817976  439734 cri.go:89] found id: ""
	I1009 20:03:13.817998  439734 logs.go:282] 0 containers: []
	W1009 20:03:13.818007  439734 logs.go:284] No container was found matching "etcd"
	I1009 20:03:13.818013  439734 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 20:03:13.818072  439734 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 20:03:13.848367  439734 cri.go:89] found id: ""
	I1009 20:03:13.848390  439734 logs.go:282] 0 containers: []
	W1009 20:03:13.848400  439734 logs.go:284] No container was found matching "coredns"
	I1009 20:03:13.848407  439734 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 20:03:13.848468  439734 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 20:03:13.875857  439734 cri.go:89] found id: "860dfe4c78e184a9fee28ff64c7bf737667b86888d84a7c16dc138690d6b945c"
	I1009 20:03:13.875877  439734 cri.go:89] found id: ""
	I1009 20:03:13.875885  439734 logs.go:282] 1 containers: [860dfe4c78e184a9fee28ff64c7bf737667b86888d84a7c16dc138690d6b945c]
	I1009 20:03:13.875943  439734 ssh_runner.go:195] Run: which crictl
	I1009 20:03:13.879668  439734 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 20:03:13.879749  439734 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 20:03:13.910409  439734 cri.go:89] found id: ""
	I1009 20:03:13.910438  439734 logs.go:282] 0 containers: []
	W1009 20:03:13.910456  439734 logs.go:284] No container was found matching "kube-proxy"
	I1009 20:03:13.910464  439734 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 20:03:13.910539  439734 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 20:03:13.937483  439734 cri.go:89] found id: "56dd658f7c0543e07581d314639507f53947ef9a0096c18a5f0d9d2205eee024"
	I1009 20:03:13.937511  439734 cri.go:89] found id: ""
	I1009 20:03:13.937519  439734 logs.go:282] 1 containers: [56dd658f7c0543e07581d314639507f53947ef9a0096c18a5f0d9d2205eee024]
	I1009 20:03:13.937582  439734 ssh_runner.go:195] Run: which crictl
	I1009 20:03:13.941357  439734 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 20:03:13.941439  439734 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 20:03:13.966729  439734 cri.go:89] found id: ""
	I1009 20:03:13.966798  439734 logs.go:282] 0 containers: []
	W1009 20:03:13.966814  439734 logs.go:284] No container was found matching "kindnet"
	I1009 20:03:13.966821  439734 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1009 20:03:13.966886  439734 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1009 20:03:13.994594  439734 cri.go:89] found id: ""
	I1009 20:03:13.994621  439734 logs.go:282] 0 containers: []
	W1009 20:03:13.994629  439734 logs.go:284] No container was found matching "storage-provisioner"
	I1009 20:03:13.994639  439734 logs.go:123] Gathering logs for kubelet ...
	I1009 20:03:13.994651  439734 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 20:03:14.113139  439734 logs.go:123] Gathering logs for dmesg ...
	I1009 20:03:14.113175  439734 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 20:03:14.131659  439734 logs.go:123] Gathering logs for describe nodes ...
	I1009 20:03:14.131751  439734 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 20:03:14.197906  439734 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 20:03:14.197934  439734 logs.go:123] Gathering logs for kube-apiserver [d77ac3de8a7a39d648d1517d5fa734b7495e3f909d23628a34c76de3d5ef57f6] ...
	I1009 20:03:14.197948  439734 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d77ac3de8a7a39d648d1517d5fa734b7495e3f909d23628a34c76de3d5ef57f6"
	I1009 20:03:14.230749  439734 logs.go:123] Gathering logs for kube-scheduler [860dfe4c78e184a9fee28ff64c7bf737667b86888d84a7c16dc138690d6b945c] ...
	I1009 20:03:14.230783  439734 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 860dfe4c78e184a9fee28ff64c7bf737667b86888d84a7c16dc138690d6b945c"
	I1009 20:03:14.294517  439734 logs.go:123] Gathering logs for kube-controller-manager [56dd658f7c0543e07581d314639507f53947ef9a0096c18a5f0d9d2205eee024] ...
	I1009 20:03:14.294552  439734 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 56dd658f7c0543e07581d314639507f53947ef9a0096c18a5f0d9d2205eee024"
	I1009 20:03:14.319997  439734 logs.go:123] Gathering logs for CRI-O ...
	I1009 20:03:14.320027  439734 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 20:03:14.380396  439734 logs.go:123] Gathering logs for container status ...
	I1009 20:03:14.380432  439734 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 20:03:15.509588  455689 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1009 20:03:15.509610  455689 machine.go:96] duration metric: took 6.997512054s to provisionDockerMachine
	I1009 20:03:15.509621  455689 start.go:294] postStartSetup for "pause-383163" (driver="docker")
	I1009 20:03:15.509632  455689 start.go:323] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1009 20:03:15.509698  455689 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1009 20:03:15.509768  455689 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-383163
	I1009 20:03:15.534469  455689 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33391 SSHKeyPath:/home/jenkins/minikube-integration/21683-294150/.minikube/machines/pause-383163/id_rsa Username:docker}
	I1009 20:03:15.641811  455689 ssh_runner.go:195] Run: cat /etc/os-release
	I1009 20:03:15.645640  455689 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1009 20:03:15.645670  455689 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1009 20:03:15.645683  455689 filesync.go:126] Scanning /home/jenkins/minikube-integration/21683-294150/.minikube/addons for local assets ...
	I1009 20:03:15.645741  455689 filesync.go:126] Scanning /home/jenkins/minikube-integration/21683-294150/.minikube/files for local assets ...
	I1009 20:03:15.645828  455689 filesync.go:149] local asset: /home/jenkins/minikube-integration/21683-294150/.minikube/files/etc/ssl/certs/2960022.pem -> 2960022.pem in /etc/ssl/certs
	I1009 20:03:15.645947  455689 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1009 20:03:15.653866  455689 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-294150/.minikube/files/etc/ssl/certs/2960022.pem --> /etc/ssl/certs/2960022.pem (1708 bytes)
	I1009 20:03:15.672631  455689 start.go:297] duration metric: took 162.99285ms for postStartSetup
	I1009 20:03:15.672712  455689 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1009 20:03:15.672751  455689 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-383163
	I1009 20:03:15.690391  455689 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33391 SSHKeyPath:/home/jenkins/minikube-integration/21683-294150/.minikube/machines/pause-383163/id_rsa Username:docker}
	I1009 20:03:15.790653  455689 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1009 20:03:15.795828  455689 fix.go:57] duration metric: took 7.304654598s for fixHost
	I1009 20:03:15.795856  455689 start.go:84] releasing machines lock for "pause-383163", held for 7.3047083s
	I1009 20:03:15.795951  455689 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" pause-383163
	I1009 20:03:15.812914  455689 ssh_runner.go:195] Run: cat /version.json
	I1009 20:03:15.812982  455689 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-383163
	I1009 20:03:15.813308  455689 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1009 20:03:15.813385  455689 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-383163
	I1009 20:03:15.832355  455689 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33391 SSHKeyPath:/home/jenkins/minikube-integration/21683-294150/.minikube/machines/pause-383163/id_rsa Username:docker}
	I1009 20:03:15.835282  455689 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33391 SSHKeyPath:/home/jenkins/minikube-integration/21683-294150/.minikube/machines/pause-383163/id_rsa Username:docker}
	I1009 20:03:16.028971  455689 ssh_runner.go:195] Run: systemctl --version
	I1009 20:03:16.039802  455689 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1009 20:03:16.080280  455689 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1009 20:03:16.084943  455689 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1009 20:03:16.085023  455689 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1009 20:03:16.094064  455689 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1009 20:03:16.094090  455689 start.go:496] detecting cgroup driver to use...
	I1009 20:03:16.094123  455689 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1009 20:03:16.094177  455689 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1009 20:03:16.110478  455689 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1009 20:03:16.124097  455689 docker.go:218] disabling cri-docker service (if available) ...
	I1009 20:03:16.124207  455689 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1009 20:03:16.140440  455689 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1009 20:03:16.154224  455689 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1009 20:03:16.300492  455689 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1009 20:03:16.443047  455689 docker.go:234] disabling docker service ...
	I1009 20:03:16.443119  455689 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1009 20:03:16.459272  455689 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1009 20:03:16.473010  455689 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1009 20:03:16.609747  455689 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1009 20:03:16.755384  455689 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1009 20:03:16.770269  455689 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1009 20:03:16.785660  455689 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1009 20:03:16.785780  455689 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 20:03:16.794833  455689 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1009 20:03:16.794906  455689 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 20:03:16.804066  455689 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 20:03:16.813911  455689 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 20:03:16.823825  455689 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1009 20:03:16.832886  455689 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 20:03:16.842559  455689 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 20:03:16.851193  455689 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 20:03:16.860209  455689 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1009 20:03:16.867891  455689 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1009 20:03:16.875257  455689 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 20:03:17.030293  455689 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1009 20:03:17.248673  455689 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1009 20:03:17.248779  455689 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1009 20:03:17.253142  455689 start.go:564] Will wait 60s for crictl version
	I1009 20:03:17.253230  455689 ssh_runner.go:195] Run: which crictl
	I1009 20:03:17.257094  455689 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1009 20:03:17.299453  455689 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1009 20:03:17.299580  455689 ssh_runner.go:195] Run: crio --version
	I1009 20:03:17.348193  455689 ssh_runner.go:195] Run: crio --version
	I1009 20:03:17.386285  455689 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1009 20:03:17.389356  455689 cli_runner.go:164] Run: docker network inspect pause-383163 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1009 20:03:17.420125  455689 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1009 20:03:17.424909  455689 kubeadm.go:883] updating cluster {Name:pause-383163 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:pause-383163 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerName
s:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false regist
ry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1009 20:03:17.425064  455689 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1009 20:03:17.425249  455689 ssh_runner.go:195] Run: sudo crictl images --output json
	I1009 20:03:17.467653  455689 crio.go:514] all images are preloaded for cri-o runtime.
	I1009 20:03:17.467680  455689 crio.go:433] Images already preloaded, skipping extraction
	I1009 20:03:17.467739  455689 ssh_runner.go:195] Run: sudo crictl images --output json
	I1009 20:03:17.502352  455689 crio.go:514] all images are preloaded for cri-o runtime.
	I1009 20:03:17.502379  455689 cache_images.go:85] Images are preloaded, skipping loading
	I1009 20:03:17.502390  455689 kubeadm.go:934] updating node { 192.168.85.2 8443 v1.34.1 crio true true} ...
	I1009 20:03:17.502505  455689 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=pause-383163 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:pause-383163 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1009 20:03:17.502597  455689 ssh_runner.go:195] Run: crio config
	I1009 20:03:17.583996  455689 cni.go:84] Creating CNI manager for ""
	I1009 20:03:17.584021  455689 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1009 20:03:17.584039  455689 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1009 20:03:17.584077  455689 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-383163 NodeName:pause-383163 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernete
s/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1009 20:03:17.584225  455689 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "pause-383163"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1009 20:03:17.584308  455689 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1009 20:03:17.593204  455689 binaries.go:44] Found k8s binaries, skipping transfer
	I1009 20:03:17.593272  455689 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1009 20:03:17.603503  455689 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (362 bytes)
	I1009 20:03:17.619729  455689 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1009 20:03:17.635589  455689 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2209 bytes)
	I1009 20:03:17.653546  455689 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1009 20:03:17.659536  455689 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 20:03:17.829050  455689 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1009 20:03:17.844490  455689 certs.go:69] Setting up /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/pause-383163 for IP: 192.168.85.2
	I1009 20:03:17.844514  455689 certs.go:195] generating shared ca certs ...
	I1009 20:03:17.844532  455689 certs.go:227] acquiring lock for ca certs: {Name:mk21fccf7de12baa51e226eec44546b42b579a70 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 20:03:17.844682  455689 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21683-294150/.minikube/ca.key
	I1009 20:03:17.844729  455689 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21683-294150/.minikube/proxy-client-ca.key
	I1009 20:03:17.844741  455689 certs.go:257] generating profile certs ...
	I1009 20:03:17.844826  455689 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/pause-383163/client.key
	I1009 20:03:17.844960  455689 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/pause-383163/apiserver.key.9a25b576
	I1009 20:03:17.845009  455689 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/pause-383163/proxy-client.key
	I1009 20:03:17.845216  455689 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-294150/.minikube/certs/296002.pem (1338 bytes)
	W1009 20:03:17.845253  455689 certs.go:480] ignoring /home/jenkins/minikube-integration/21683-294150/.minikube/certs/296002_empty.pem, impossibly tiny 0 bytes
	I1009 20:03:17.845262  455689 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-294150/.minikube/certs/ca-key.pem (1679 bytes)
	I1009 20:03:17.845294  455689 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-294150/.minikube/certs/ca.pem (1078 bytes)
	I1009 20:03:17.845327  455689 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-294150/.minikube/certs/cert.pem (1123 bytes)
	I1009 20:03:17.845350  455689 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-294150/.minikube/certs/key.pem (1679 bytes)
	I1009 20:03:17.845395  455689 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-294150/.minikube/files/etc/ssl/certs/2960022.pem (1708 bytes)
	I1009 20:03:17.846000  455689 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-294150/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1009 20:03:17.866561  455689 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-294150/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1009 20:03:17.886121  455689 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-294150/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1009 20:03:17.904350  455689 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-294150/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1009 20:03:17.923310  455689 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/pause-383163/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1009 20:03:17.942166  455689 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/pause-383163/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1009 20:03:17.961341  455689 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/pause-383163/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1009 20:03:17.980357  455689 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/pause-383163/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1009 20:03:18.014856  455689 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-294150/.minikube/certs/296002.pem --> /usr/share/ca-certificates/296002.pem (1338 bytes)
	I1009 20:03:18.035380  455689 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-294150/.minikube/files/etc/ssl/certs/2960022.pem --> /usr/share/ca-certificates/2960022.pem (1708 bytes)
	I1009 20:03:18.054991  455689 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-294150/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1009 20:03:18.074414  455689 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1009 20:03:18.088506  455689 ssh_runner.go:195] Run: openssl version
	I1009 20:03:18.095400  455689 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/296002.pem && ln -fs /usr/share/ca-certificates/296002.pem /etc/ssl/certs/296002.pem"
	I1009 20:03:18.104323  455689 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/296002.pem
	I1009 20:03:18.108401  455689 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  9 19:08 /usr/share/ca-certificates/296002.pem
	I1009 20:03:18.108474  455689 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/296002.pem
	I1009 20:03:18.149576  455689 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/296002.pem /etc/ssl/certs/51391683.0"
	I1009 20:03:18.157705  455689 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2960022.pem && ln -fs /usr/share/ca-certificates/2960022.pem /etc/ssl/certs/2960022.pem"
	I1009 20:03:18.166360  455689 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2960022.pem
	I1009 20:03:18.170445  455689 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  9 19:08 /usr/share/ca-certificates/2960022.pem
	I1009 20:03:18.170514  455689 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2960022.pem
	I1009 20:03:18.211684  455689 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2960022.pem /etc/ssl/certs/3ec20f2e.0"
	I1009 20:03:18.220071  455689 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1009 20:03:18.228634  455689 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1009 20:03:18.232358  455689 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  9 19:01 /usr/share/ca-certificates/minikubeCA.pem
	I1009 20:03:18.232454  455689 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1009 20:03:18.273443  455689 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1009 20:03:18.282129  455689 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1009 20:03:18.286218  455689 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1009 20:03:18.327950  455689 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1009 20:03:18.373923  455689 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1009 20:03:18.417931  455689 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1009 20:03:18.470240  455689 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1009 20:03:18.581945  455689 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1009 20:03:18.655086  455689 kubeadm.go:400] StartCluster: {Name:pause-383163 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:pause-383163 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[
] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-
aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1009 20:03:18.655254  455689 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1009 20:03:18.655350  455689 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1009 20:03:18.754537  455689 cri.go:89] found id: "a167ee63efd53a4275e3e7873bad1603ebe7e8a31f0dd3198756d4b1f148e52a"
	I1009 20:03:18.754609  455689 cri.go:89] found id: "75d7f10be8c3e4bdde6c5890b28343819891d67d2e73eb06f38c47013ae3a3cb"
	I1009 20:03:18.754630  455689 cri.go:89] found id: "4a07552f3446603a46059c12e8713e08b798083b8d17d79c386bb391fc8c893c"
	I1009 20:03:18.754651  455689 cri.go:89] found id: "b9eb2f7f088ee645099c5cd4b8e1f669da2435cd313728f2bfbfc759ff9937b6"
	I1009 20:03:18.754686  455689 cri.go:89] found id: "a3e1d7ac8b25781dbad544fea22784db5fdb0f4de80670ff5f131dc3cc536739"
	I1009 20:03:18.754712  455689 cri.go:89] found id: "8b7b5b8265013e32789ed2351787ae158830229a757a1bc103a4456924b76035"
	I1009 20:03:18.754731  455689 cri.go:89] found id: "8ab6890c2164d0b6bbc82e2679dbd67b5dfe706686726cd94224aaf22c16f80f"
	I1009 20:03:18.754766  455689 cri.go:89] found id: "bb3480137661617835e8f2461eda88ed8e0afcd207648cb4d703a117457533cf"
	I1009 20:03:18.754789  455689 cri.go:89] found id: "5b2ba970850f91ec7dc47036664e17c95915f9f4e974dfe18f12c57f19dc05a3"
	I1009 20:03:18.754814  455689 cri.go:89] found id: "715315fe8199656e0b35e6405a491d6927104742238ac1c9811ad467110e9936"
	I1009 20:03:18.754852  455689 cri.go:89] found id: "f8690925bda20a05089b5b66d446d2a265402cbc16285b44139837240ca69a30"
	I1009 20:03:18.754876  455689 cri.go:89] found id: "5f2a4c1ed909bfd58b69d5787042aa91a6c0d43e3eef176ba2274c648fad521a"
	I1009 20:03:18.754896  455689 cri.go:89] found id: "a58cd421c4789d3b1e15645239af493035656079a6c5a7405c605212d4f12db9"
	I1009 20:03:18.754933  455689 cri.go:89] found id: ""
	I1009 20:03:18.755024  455689 ssh_runner.go:195] Run: sudo runc list -f json
	W1009 20:03:18.781184  455689 kubeadm.go:407] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-09T20:03:18Z" level=error msg="open /run/runc: no such file or directory"
	I1009 20:03:18.781349  455689 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1009 20:03:18.801492  455689 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1009 20:03:18.801562  455689 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1009 20:03:18.801652  455689 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1009 20:03:18.823181  455689 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1009 20:03:18.823950  455689 kubeconfig.go:125] found "pause-383163" server: "https://192.168.85.2:8443"
	I1009 20:03:18.824970  455689 kapi.go:59] client config for pause-383163: &rest.Config{Host:"https://192.168.85.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21683-294150/.minikube/profiles/pause-383163/client.crt", KeyFile:"/home/jenkins/minikube-integration/21683-294150/.minikube/profiles/pause-383163/client.key", CAFile:"/home/jenkins/minikube-integration/21683-294150/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]s
tring(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2120180), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1009 20:03:18.825630  455689 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1009 20:03:18.825730  455689 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1009 20:03:18.825766  455689 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1009 20:03:18.825792  455689 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1009 20:03:18.825817  455689 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1009 20:03:18.826251  455689 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1009 20:03:18.837940  455689 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.85.2
	I1009 20:03:18.837975  455689 kubeadm.go:601] duration metric: took 36.395177ms to restartPrimaryControlPlane
	I1009 20:03:18.837985  455689 kubeadm.go:402] duration metric: took 182.909493ms to StartCluster
	I1009 20:03:18.838000  455689 settings.go:142] acquiring lock: {Name:mk20228ebaa2294ae35726600a0d8058088b24a4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 20:03:18.838076  455689 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21683-294150/kubeconfig
	I1009 20:03:18.838997  455689 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-294150/kubeconfig: {Name:mke4661344aa77e10bf6690825e8d3a29b29b411 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 20:03:18.839240  455689 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1009 20:03:18.839569  455689 config.go:182] Loaded profile config "pause-383163": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1009 20:03:18.839618  455689 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1009 20:03:18.843237  455689 out.go:179] * Verifying Kubernetes components...
	I1009 20:03:18.843313  455689 out.go:179] * Enabled addons: 
	I1009 20:03:16.918395  439734 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1009 20:03:16.918787  439734 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1009 20:03:16.918840  439734 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 20:03:16.918901  439734 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 20:03:16.958626  439734 cri.go:89] found id: "d77ac3de8a7a39d648d1517d5fa734b7495e3f909d23628a34c76de3d5ef57f6"
	I1009 20:03:16.958644  439734 cri.go:89] found id: ""
	I1009 20:03:16.958652  439734 logs.go:282] 1 containers: [d77ac3de8a7a39d648d1517d5fa734b7495e3f909d23628a34c76de3d5ef57f6]
	I1009 20:03:16.958712  439734 ssh_runner.go:195] Run: which crictl
	I1009 20:03:16.962685  439734 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 20:03:16.962753  439734 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 20:03:16.999982  439734 cri.go:89] found id: ""
	I1009 20:03:17.000009  439734 logs.go:282] 0 containers: []
	W1009 20:03:17.000019  439734 logs.go:284] No container was found matching "etcd"
	I1009 20:03:17.000027  439734 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 20:03:17.000100  439734 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 20:03:17.041618  439734 cri.go:89] found id: ""
	I1009 20:03:17.041641  439734 logs.go:282] 0 containers: []
	W1009 20:03:17.041736  439734 logs.go:284] No container was found matching "coredns"
	I1009 20:03:17.041744  439734 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 20:03:17.041804  439734 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 20:03:17.073489  439734 cri.go:89] found id: "860dfe4c78e184a9fee28ff64c7bf737667b86888d84a7c16dc138690d6b945c"
	I1009 20:03:17.073509  439734 cri.go:89] found id: ""
	I1009 20:03:17.073517  439734 logs.go:282] 1 containers: [860dfe4c78e184a9fee28ff64c7bf737667b86888d84a7c16dc138690d6b945c]
	I1009 20:03:17.073578  439734 ssh_runner.go:195] Run: which crictl
	I1009 20:03:17.077501  439734 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 20:03:17.077574  439734 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 20:03:17.120759  439734 cri.go:89] found id: ""
	I1009 20:03:17.120782  439734 logs.go:282] 0 containers: []
	W1009 20:03:17.120792  439734 logs.go:284] No container was found matching "kube-proxy"
	I1009 20:03:17.120799  439734 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 20:03:17.120897  439734 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 20:03:17.153701  439734 cri.go:89] found id: "56dd658f7c0543e07581d314639507f53947ef9a0096c18a5f0d9d2205eee024"
	I1009 20:03:17.153722  439734 cri.go:89] found id: ""
	I1009 20:03:17.153730  439734 logs.go:282] 1 containers: [56dd658f7c0543e07581d314639507f53947ef9a0096c18a5f0d9d2205eee024]
	I1009 20:03:17.153791  439734 ssh_runner.go:195] Run: which crictl
	I1009 20:03:17.158367  439734 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 20:03:17.158509  439734 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 20:03:17.195790  439734 cri.go:89] found id: ""
	I1009 20:03:17.195865  439734 logs.go:282] 0 containers: []
	W1009 20:03:17.195892  439734 logs.go:284] No container was found matching "kindnet"
	I1009 20:03:17.195919  439734 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1009 20:03:17.196001  439734 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1009 20:03:17.235498  439734 cri.go:89] found id: ""
	I1009 20:03:17.235578  439734 logs.go:282] 0 containers: []
	W1009 20:03:17.235601  439734 logs.go:284] No container was found matching "storage-provisioner"
	I1009 20:03:17.235630  439734 logs.go:123] Gathering logs for kube-apiserver [d77ac3de8a7a39d648d1517d5fa734b7495e3f909d23628a34c76de3d5ef57f6] ...
	I1009 20:03:17.235664  439734 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d77ac3de8a7a39d648d1517d5fa734b7495e3f909d23628a34c76de3d5ef57f6"
	I1009 20:03:17.288386  439734 logs.go:123] Gathering logs for kube-scheduler [860dfe4c78e184a9fee28ff64c7bf737667b86888d84a7c16dc138690d6b945c] ...
	I1009 20:03:17.288460  439734 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 860dfe4c78e184a9fee28ff64c7bf737667b86888d84a7c16dc138690d6b945c"
	I1009 20:03:17.362194  439734 logs.go:123] Gathering logs for kube-controller-manager [56dd658f7c0543e07581d314639507f53947ef9a0096c18a5f0d9d2205eee024] ...
	I1009 20:03:17.362268  439734 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 56dd658f7c0543e07581d314639507f53947ef9a0096c18a5f0d9d2205eee024"
	I1009 20:03:17.400210  439734 logs.go:123] Gathering logs for CRI-O ...
	I1009 20:03:17.400235  439734 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 20:03:17.470776  439734 logs.go:123] Gathering logs for container status ...
	I1009 20:03:17.470809  439734 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 20:03:17.506052  439734 logs.go:123] Gathering logs for kubelet ...
	I1009 20:03:17.506083  439734 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 20:03:17.651048  439734 logs.go:123] Gathering logs for dmesg ...
	I1009 20:03:17.651094  439734 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 20:03:17.669703  439734 logs.go:123] Gathering logs for describe nodes ...
	I1009 20:03:17.669733  439734 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 20:03:17.773879  439734 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 20:03:18.846346  455689 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 20:03:18.846493  455689 addons.go:514] duration metric: took 6.86701ms for enable addons: enabled=[]
	I1009 20:03:19.095247  455689 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1009 20:03:19.114131  455689 node_ready.go:35] waiting up to 6m0s for node "pause-383163" to be "Ready" ...
	I1009 20:03:23.206100  455689 node_ready.go:49] node "pause-383163" is "Ready"
	I1009 20:03:23.206131  455689 node_ready.go:38] duration metric: took 4.091971451s for node "pause-383163" to be "Ready" ...
	I1009 20:03:23.206146  455689 api_server.go:52] waiting for apiserver process to appear ...
	I1009 20:03:23.206209  455689 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:03:23.226115  455689 api_server.go:72] duration metric: took 4.386839053s to wait for apiserver process to appear ...
	I1009 20:03:23.226141  455689 api_server.go:88] waiting for apiserver healthz status ...
	I1009 20:03:23.226162  455689 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1009 20:03:23.236894  455689 api_server.go:279] https://192.168.85.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1009 20:03:23.236925  455689 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1009 20:03:20.274700  439734 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1009 20:03:20.275064  439734 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1009 20:03:20.275104  439734 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 20:03:20.275158  439734 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 20:03:20.319984  439734 cri.go:89] found id: "d77ac3de8a7a39d648d1517d5fa734b7495e3f909d23628a34c76de3d5ef57f6"
	I1009 20:03:20.320003  439734 cri.go:89] found id: ""
	I1009 20:03:20.320013  439734 logs.go:282] 1 containers: [d77ac3de8a7a39d648d1517d5fa734b7495e3f909d23628a34c76de3d5ef57f6]
	I1009 20:03:20.320075  439734 ssh_runner.go:195] Run: which crictl
	I1009 20:03:20.329751  439734 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 20:03:20.329827  439734 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 20:03:20.372154  439734 cri.go:89] found id: ""
	I1009 20:03:20.372177  439734 logs.go:282] 0 containers: []
	W1009 20:03:20.372186  439734 logs.go:284] No container was found matching "etcd"
	I1009 20:03:20.372193  439734 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 20:03:20.372254  439734 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 20:03:20.414446  439734 cri.go:89] found id: ""
	I1009 20:03:20.414468  439734 logs.go:282] 0 containers: []
	W1009 20:03:20.414477  439734 logs.go:284] No container was found matching "coredns"
	I1009 20:03:20.414483  439734 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 20:03:20.414549  439734 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 20:03:20.469103  439734 cri.go:89] found id: "860dfe4c78e184a9fee28ff64c7bf737667b86888d84a7c16dc138690d6b945c"
	I1009 20:03:20.469169  439734 cri.go:89] found id: ""
	I1009 20:03:20.469177  439734 logs.go:282] 1 containers: [860dfe4c78e184a9fee28ff64c7bf737667b86888d84a7c16dc138690d6b945c]
	I1009 20:03:20.469239  439734 ssh_runner.go:195] Run: which crictl
	I1009 20:03:20.476540  439734 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 20:03:20.476621  439734 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 20:03:20.528340  439734 cri.go:89] found id: ""
	I1009 20:03:20.528362  439734 logs.go:282] 0 containers: []
	W1009 20:03:20.528371  439734 logs.go:284] No container was found matching "kube-proxy"
	I1009 20:03:20.528377  439734 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 20:03:20.528442  439734 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 20:03:20.574587  439734 cri.go:89] found id: "56dd658f7c0543e07581d314639507f53947ef9a0096c18a5f0d9d2205eee024"
	I1009 20:03:20.574664  439734 cri.go:89] found id: ""
	I1009 20:03:20.574688  439734 logs.go:282] 1 containers: [56dd658f7c0543e07581d314639507f53947ef9a0096c18a5f0d9d2205eee024]
	I1009 20:03:20.574780  439734 ssh_runner.go:195] Run: which crictl
	I1009 20:03:20.581646  439734 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 20:03:20.581796  439734 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 20:03:20.644312  439734 cri.go:89] found id: ""
	I1009 20:03:20.644391  439734 logs.go:282] 0 containers: []
	W1009 20:03:20.644417  439734 logs.go:284] No container was found matching "kindnet"
	I1009 20:03:20.644455  439734 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1009 20:03:20.644542  439734 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1009 20:03:20.692122  439734 cri.go:89] found id: ""
	I1009 20:03:20.692201  439734 logs.go:282] 0 containers: []
	W1009 20:03:20.692225  439734 logs.go:284] No container was found matching "storage-provisioner"
	I1009 20:03:20.692266  439734 logs.go:123] Gathering logs for CRI-O ...
	I1009 20:03:20.692297  439734 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 20:03:20.790226  439734 logs.go:123] Gathering logs for container status ...
	I1009 20:03:20.790308  439734 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 20:03:20.837868  439734 logs.go:123] Gathering logs for kubelet ...
	I1009 20:03:20.837944  439734 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 20:03:20.987972  439734 logs.go:123] Gathering logs for dmesg ...
	I1009 20:03:20.988062  439734 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 20:03:21.011346  439734 logs.go:123] Gathering logs for describe nodes ...
	I1009 20:03:21.011372  439734 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 20:03:21.148357  439734 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 20:03:21.148421  439734 logs.go:123] Gathering logs for kube-apiserver [d77ac3de8a7a39d648d1517d5fa734b7495e3f909d23628a34c76de3d5ef57f6] ...
	I1009 20:03:21.148451  439734 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d77ac3de8a7a39d648d1517d5fa734b7495e3f909d23628a34c76de3d5ef57f6"
	I1009 20:03:21.204135  439734 logs.go:123] Gathering logs for kube-scheduler [860dfe4c78e184a9fee28ff64c7bf737667b86888d84a7c16dc138690d6b945c] ...
	I1009 20:03:21.204208  439734 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 860dfe4c78e184a9fee28ff64c7bf737667b86888d84a7c16dc138690d6b945c"
	I1009 20:03:21.311602  439734 logs.go:123] Gathering logs for kube-controller-manager [56dd658f7c0543e07581d314639507f53947ef9a0096c18a5f0d9d2205eee024] ...
	I1009 20:03:21.311682  439734 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 56dd658f7c0543e07581d314639507f53947ef9a0096c18a5f0d9d2205eee024"
	I1009 20:03:23.843113  439734 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1009 20:03:23.843528  439734 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1009 20:03:23.843567  439734 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 20:03:23.843625  439734 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 20:03:23.888654  439734 cri.go:89] found id: "d77ac3de8a7a39d648d1517d5fa734b7495e3f909d23628a34c76de3d5ef57f6"
	I1009 20:03:23.888675  439734 cri.go:89] found id: ""
	I1009 20:03:23.888684  439734 logs.go:282] 1 containers: [d77ac3de8a7a39d648d1517d5fa734b7495e3f909d23628a34c76de3d5ef57f6]
	I1009 20:03:23.888742  439734 ssh_runner.go:195] Run: which crictl
	I1009 20:03:23.894563  439734 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 20:03:23.894679  439734 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 20:03:23.967675  439734 cri.go:89] found id: ""
	I1009 20:03:23.967713  439734 logs.go:282] 0 containers: []
	W1009 20:03:23.967722  439734 logs.go:284] No container was found matching "etcd"
	I1009 20:03:23.967728  439734 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 20:03:23.967803  439734 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 20:03:24.016665  439734 cri.go:89] found id: ""
	I1009 20:03:24.016735  439734 logs.go:282] 0 containers: []
	W1009 20:03:24.016764  439734 logs.go:284] No container was found matching "coredns"
	I1009 20:03:24.016791  439734 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 20:03:24.016919  439734 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 20:03:24.065029  439734 cri.go:89] found id: "860dfe4c78e184a9fee28ff64c7bf737667b86888d84a7c16dc138690d6b945c"
	I1009 20:03:24.065095  439734 cri.go:89] found id: ""
	I1009 20:03:24.065189  439734 logs.go:282] 1 containers: [860dfe4c78e184a9fee28ff64c7bf737667b86888d84a7c16dc138690d6b945c]
	I1009 20:03:24.065277  439734 ssh_runner.go:195] Run: which crictl
	I1009 20:03:24.072497  439734 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 20:03:24.072618  439734 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 20:03:24.140643  439734 cri.go:89] found id: ""
	I1009 20:03:24.140710  439734 logs.go:282] 0 containers: []
	W1009 20:03:24.140739  439734 logs.go:284] No container was found matching "kube-proxy"
	I1009 20:03:24.140785  439734 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 20:03:24.140877  439734 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 20:03:24.185805  439734 cri.go:89] found id: "56dd658f7c0543e07581d314639507f53947ef9a0096c18a5f0d9d2205eee024"
	I1009 20:03:24.185839  439734 cri.go:89] found id: ""
	I1009 20:03:24.185848  439734 logs.go:282] 1 containers: [56dd658f7c0543e07581d314639507f53947ef9a0096c18a5f0d9d2205eee024]
	I1009 20:03:24.185915  439734 ssh_runner.go:195] Run: which crictl
	I1009 20:03:24.190238  439734 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 20:03:24.190323  439734 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 20:03:24.237902  439734 cri.go:89] found id: ""
	I1009 20:03:24.237976  439734 logs.go:282] 0 containers: []
	W1009 20:03:24.238002  439734 logs.go:284] No container was found matching "kindnet"
	I1009 20:03:24.238040  439734 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1009 20:03:24.238119  439734 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1009 20:03:24.279737  439734 cri.go:89] found id: ""
	I1009 20:03:24.279813  439734 logs.go:282] 0 containers: []
	W1009 20:03:24.279837  439734 logs.go:284] No container was found matching "storage-provisioner"
	I1009 20:03:24.279887  439734 logs.go:123] Gathering logs for dmesg ...
	I1009 20:03:24.279931  439734 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 20:03:24.307158  439734 logs.go:123] Gathering logs for describe nodes ...
	I1009 20:03:24.307252  439734 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 20:03:24.400052  439734 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 20:03:24.400125  439734 logs.go:123] Gathering logs for kube-apiserver [d77ac3de8a7a39d648d1517d5fa734b7495e3f909d23628a34c76de3d5ef57f6] ...
	I1009 20:03:24.400156  439734 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d77ac3de8a7a39d648d1517d5fa734b7495e3f909d23628a34c76de3d5ef57f6"
	I1009 20:03:24.456710  439734 logs.go:123] Gathering logs for kube-scheduler [860dfe4c78e184a9fee28ff64c7bf737667b86888d84a7c16dc138690d6b945c] ...
	I1009 20:03:24.456785  439734 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 860dfe4c78e184a9fee28ff64c7bf737667b86888d84a7c16dc138690d6b945c"
	I1009 20:03:24.547322  439734 logs.go:123] Gathering logs for kube-controller-manager [56dd658f7c0543e07581d314639507f53947ef9a0096c18a5f0d9d2205eee024] ...
	I1009 20:03:24.547404  439734 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 56dd658f7c0543e07581d314639507f53947ef9a0096c18a5f0d9d2205eee024"
	I1009 20:03:24.594512  439734 logs.go:123] Gathering logs for CRI-O ...
	I1009 20:03:24.594536  439734 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 20:03:24.663315  439734 logs.go:123] Gathering logs for container status ...
	I1009 20:03:24.663352  439734 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 20:03:24.699592  439734 logs.go:123] Gathering logs for kubelet ...
	I1009 20:03:24.699668  439734 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 20:03:23.726364  455689 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1009 20:03:23.737081  455689 api_server.go:279] https://192.168.85.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1009 20:03:23.737120  455689 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1009 20:03:24.226693  455689 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1009 20:03:24.236295  455689 api_server.go:279] https://192.168.85.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1009 20:03:24.236323  455689 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1009 20:03:24.726728  455689 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1009 20:03:24.735617  455689 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1009 20:03:24.736896  455689 api_server.go:141] control plane version: v1.34.1
	I1009 20:03:24.736927  455689 api_server.go:131] duration metric: took 1.510777921s to wait for apiserver health ...
	I1009 20:03:24.736937  455689 system_pods.go:43] waiting for kube-system pods to appear ...
	I1009 20:03:24.741734  455689 system_pods.go:59] 7 kube-system pods found
	I1009 20:03:24.741779  455689 system_pods.go:61] "coredns-66bc5c9577-kj4l8" [9347b1f1-06ba-4612-96e4-9f5e09ba2500] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1009 20:03:24.741791  455689 system_pods.go:61] "etcd-pause-383163" [cfe74798-561d-4053-91e7-db47e37cad9e] Running
	I1009 20:03:24.741802  455689 system_pods.go:61] "kindnet-2blxf" [2bcb5c94-b301-4db9-bcf2-5f6eba8b07c7] Running
	I1009 20:03:24.741815  455689 system_pods.go:61] "kube-apiserver-pause-383163" [5c811666-93d0-42aa-a0c6-151265e26643] Running
	I1009 20:03:24.741824  455689 system_pods.go:61] "kube-controller-manager-pause-383163" [67d8f6ea-5f5c-4aae-8d70-04400ce570be] Running
	I1009 20:03:24.741830  455689 system_pods.go:61] "kube-proxy-9k7j8" [b521ebd5-2359-4c44-9357-f2ac6cdd9719] Running
	I1009 20:03:24.741836  455689 system_pods.go:61] "kube-scheduler-pause-383163" [04e13cb2-6a0b-457c-89b7-e7dbfe30a206] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1009 20:03:24.741846  455689 system_pods.go:74] duration metric: took 4.897279ms to wait for pod list to return data ...
	I1009 20:03:24.741855  455689 default_sa.go:34] waiting for default service account to be created ...
	I1009 20:03:24.745025  455689 default_sa.go:45] found service account: "default"
	I1009 20:03:24.745048  455689 default_sa.go:55] duration metric: took 3.183016ms for default service account to be created ...
	I1009 20:03:24.745058  455689 system_pods.go:116] waiting for k8s-apps to be running ...
	I1009 20:03:24.749369  455689 system_pods.go:86] 7 kube-system pods found
	I1009 20:03:24.749413  455689 system_pods.go:89] "coredns-66bc5c9577-kj4l8" [9347b1f1-06ba-4612-96e4-9f5e09ba2500] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1009 20:03:24.749423  455689 system_pods.go:89] "etcd-pause-383163" [cfe74798-561d-4053-91e7-db47e37cad9e] Running
	I1009 20:03:24.749435  455689 system_pods.go:89] "kindnet-2blxf" [2bcb5c94-b301-4db9-bcf2-5f6eba8b07c7] Running
	I1009 20:03:24.749440  455689 system_pods.go:89] "kube-apiserver-pause-383163" [5c811666-93d0-42aa-a0c6-151265e26643] Running
	I1009 20:03:24.749444  455689 system_pods.go:89] "kube-controller-manager-pause-383163" [67d8f6ea-5f5c-4aae-8d70-04400ce570be] Running
	I1009 20:03:24.749455  455689 system_pods.go:89] "kube-proxy-9k7j8" [b521ebd5-2359-4c44-9357-f2ac6cdd9719] Running
	I1009 20:03:24.749461  455689 system_pods.go:89] "kube-scheduler-pause-383163" [04e13cb2-6a0b-457c-89b7-e7dbfe30a206] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1009 20:03:24.749477  455689 system_pods.go:126] duration metric: took 4.404516ms to wait for k8s-apps to be running ...
	I1009 20:03:24.749490  455689 system_svc.go:44] waiting for kubelet service to be running ....
	I1009 20:03:24.749556  455689 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1009 20:03:24.767849  455689 system_svc.go:56] duration metric: took 18.350123ms WaitForService to wait for kubelet
	I1009 20:03:24.767878  455689 kubeadm.go:586] duration metric: took 5.928605905s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1009 20:03:24.767897  455689 node_conditions.go:102] verifying NodePressure condition ...
	I1009 20:03:24.771873  455689 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1009 20:03:24.771907  455689 node_conditions.go:123] node cpu capacity is 2
	I1009 20:03:24.771920  455689 node_conditions.go:105] duration metric: took 4.007794ms to run NodePressure ...
	I1009 20:03:24.771937  455689 start.go:242] waiting for startup goroutines ...
	I1009 20:03:24.771959  455689 start.go:247] waiting for cluster config update ...
	I1009 20:03:24.771968  455689 start.go:256] writing updated cluster config ...
	I1009 20:03:24.773297  455689 ssh_runner.go:195] Run: rm -f paused
	I1009 20:03:24.779762  455689 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1009 20:03:24.780608  455689 kapi.go:59] client config for pause-383163: &rest.Config{Host:"https://192.168.85.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21683-294150/.minikube/profiles/pause-383163/client.crt", KeyFile:"/home/jenkins/minikube-integration/21683-294150/.minikube/profiles/pause-383163/client.key", CAFile:"/home/jenkins/minikube-integration/21683-294150/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2120180), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1009 20:03:24.784257  455689 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-kj4l8" in "kube-system" namespace to be "Ready" or be gone ...
	W1009 20:03:26.789657  455689 pod_ready.go:104] pod "coredns-66bc5c9577-kj4l8" is not "Ready", error: <nil>
	I1009 20:03:27.326059  439734 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1009 20:03:27.326485  439734 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1009 20:03:27.326533  439734 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 20:03:27.326593  439734 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 20:03:27.354045  439734 cri.go:89] found id: "d77ac3de8a7a39d648d1517d5fa734b7495e3f909d23628a34c76de3d5ef57f6"
	I1009 20:03:27.354067  439734 cri.go:89] found id: ""
	I1009 20:03:27.354075  439734 logs.go:282] 1 containers: [d77ac3de8a7a39d648d1517d5fa734b7495e3f909d23628a34c76de3d5ef57f6]
	I1009 20:03:27.354134  439734 ssh_runner.go:195] Run: which crictl
	I1009 20:03:27.357771  439734 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 20:03:27.357850  439734 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 20:03:27.385231  439734 cri.go:89] found id: ""
	I1009 20:03:27.385255  439734 logs.go:282] 0 containers: []
	W1009 20:03:27.385264  439734 logs.go:284] No container was found matching "etcd"
	I1009 20:03:27.385270  439734 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 20:03:27.385330  439734 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 20:03:27.412550  439734 cri.go:89] found id: ""
	I1009 20:03:27.412576  439734 logs.go:282] 0 containers: []
	W1009 20:03:27.412585  439734 logs.go:284] No container was found matching "coredns"
	I1009 20:03:27.412592  439734 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 20:03:27.412650  439734 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 20:03:27.440052  439734 cri.go:89] found id: "860dfe4c78e184a9fee28ff64c7bf737667b86888d84a7c16dc138690d6b945c"
	I1009 20:03:27.440076  439734 cri.go:89] found id: ""
	I1009 20:03:27.440091  439734 logs.go:282] 1 containers: [860dfe4c78e184a9fee28ff64c7bf737667b86888d84a7c16dc138690d6b945c]
	I1009 20:03:27.440153  439734 ssh_runner.go:195] Run: which crictl
	I1009 20:03:27.443975  439734 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 20:03:27.444050  439734 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 20:03:27.472498  439734 cri.go:89] found id: ""
	I1009 20:03:27.472522  439734 logs.go:282] 0 containers: []
	W1009 20:03:27.472531  439734 logs.go:284] No container was found matching "kube-proxy"
	I1009 20:03:27.472544  439734 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 20:03:27.472605  439734 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 20:03:27.499499  439734 cri.go:89] found id: "56dd658f7c0543e07581d314639507f53947ef9a0096c18a5f0d9d2205eee024"
	I1009 20:03:27.499535  439734 cri.go:89] found id: ""
	I1009 20:03:27.499544  439734 logs.go:282] 1 containers: [56dd658f7c0543e07581d314639507f53947ef9a0096c18a5f0d9d2205eee024]
	I1009 20:03:27.499653  439734 ssh_runner.go:195] Run: which crictl
	I1009 20:03:27.503785  439734 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 20:03:27.503905  439734 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 20:03:27.532469  439734 cri.go:89] found id: ""
	I1009 20:03:27.532509  439734 logs.go:282] 0 containers: []
	W1009 20:03:27.532519  439734 logs.go:284] No container was found matching "kindnet"
	I1009 20:03:27.532542  439734 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1009 20:03:27.532630  439734 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1009 20:03:27.559782  439734 cri.go:89] found id: ""
	I1009 20:03:27.559808  439734 logs.go:282] 0 containers: []
	W1009 20:03:27.559817  439734 logs.go:284] No container was found matching "storage-provisioner"
	I1009 20:03:27.559826  439734 logs.go:123] Gathering logs for kubelet ...
	I1009 20:03:27.559837  439734 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 20:03:27.681078  439734 logs.go:123] Gathering logs for dmesg ...
	I1009 20:03:27.681119  439734 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 20:03:27.698634  439734 logs.go:123] Gathering logs for describe nodes ...
	I1009 20:03:27.698663  439734 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 20:03:27.774623  439734 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 20:03:27.774646  439734 logs.go:123] Gathering logs for kube-apiserver [d77ac3de8a7a39d648d1517d5fa734b7495e3f909d23628a34c76de3d5ef57f6] ...
	I1009 20:03:27.774659  439734 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d77ac3de8a7a39d648d1517d5fa734b7495e3f909d23628a34c76de3d5ef57f6"
	I1009 20:03:27.811575  439734 logs.go:123] Gathering logs for kube-scheduler [860dfe4c78e184a9fee28ff64c7bf737667b86888d84a7c16dc138690d6b945c] ...
	I1009 20:03:27.811608  439734 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 860dfe4c78e184a9fee28ff64c7bf737667b86888d84a7c16dc138690d6b945c"
	I1009 20:03:27.885299  439734 logs.go:123] Gathering logs for kube-controller-manager [56dd658f7c0543e07581d314639507f53947ef9a0096c18a5f0d9d2205eee024] ...
	I1009 20:03:27.885344  439734 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 56dd658f7c0543e07581d314639507f53947ef9a0096c18a5f0d9d2205eee024"
	I1009 20:03:27.915445  439734 logs.go:123] Gathering logs for CRI-O ...
	I1009 20:03:27.915471  439734 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 20:03:27.978009  439734 logs.go:123] Gathering logs for container status ...
	I1009 20:03:27.978051  439734 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1009 20:03:28.790130  455689 pod_ready.go:104] pod "coredns-66bc5c9577-kj4l8" is not "Ready", error: <nil>
	I1009 20:03:30.790243  455689 pod_ready.go:94] pod "coredns-66bc5c9577-kj4l8" is "Ready"
	I1009 20:03:30.790265  455689 pod_ready.go:86] duration metric: took 6.005982193s for pod "coredns-66bc5c9577-kj4l8" in "kube-system" namespace to be "Ready" or be gone ...
	I1009 20:03:30.793797  455689 pod_ready.go:83] waiting for pod "etcd-pause-383163" in "kube-system" namespace to be "Ready" or be gone ...
	I1009 20:03:31.299907  455689 pod_ready.go:94] pod "etcd-pause-383163" is "Ready"
	I1009 20:03:31.299929  455689 pod_ready.go:86] duration metric: took 506.108769ms for pod "etcd-pause-383163" in "kube-system" namespace to be "Ready" or be gone ...
	I1009 20:03:31.303568  455689 pod_ready.go:83] waiting for pod "kube-apiserver-pause-383163" in "kube-system" namespace to be "Ready" or be gone ...
	I1009 20:03:31.809364  455689 pod_ready.go:94] pod "kube-apiserver-pause-383163" is "Ready"
	I1009 20:03:31.809395  455689 pod_ready.go:86] duration metric: took 505.805473ms for pod "kube-apiserver-pause-383163" in "kube-system" namespace to be "Ready" or be gone ...
	I1009 20:03:31.812011  455689 pod_ready.go:83] waiting for pod "kube-controller-manager-pause-383163" in "kube-system" namespace to be "Ready" or be gone ...
	W1009 20:03:33.820288  455689 pod_ready.go:104] pod "kube-controller-manager-pause-383163" is not "Ready", error: <nil>
	I1009 20:03:34.321975  455689 pod_ready.go:94] pod "kube-controller-manager-pause-383163" is "Ready"
	I1009 20:03:34.321998  455689 pod_ready.go:86] duration metric: took 2.509962833s for pod "kube-controller-manager-pause-383163" in "kube-system" namespace to be "Ready" or be gone ...
	I1009 20:03:34.326984  455689 pod_ready.go:83] waiting for pod "kube-proxy-9k7j8" in "kube-system" namespace to be "Ready" or be gone ...
	I1009 20:03:34.391574  455689 pod_ready.go:94] pod "kube-proxy-9k7j8" is "Ready"
	I1009 20:03:34.391602  455689 pod_ready.go:86] duration metric: took 64.59613ms for pod "kube-proxy-9k7j8" in "kube-system" namespace to be "Ready" or be gone ...
	I1009 20:03:34.587930  455689 pod_ready.go:83] waiting for pod "kube-scheduler-pause-383163" in "kube-system" namespace to be "Ready" or be gone ...
	I1009 20:03:34.988313  455689 pod_ready.go:94] pod "kube-scheduler-pause-383163" is "Ready"
	I1009 20:03:34.988339  455689 pod_ready.go:86] duration metric: took 400.377959ms for pod "kube-scheduler-pause-383163" in "kube-system" namespace to be "Ready" or be gone ...
	I1009 20:03:34.988351  455689 pod_ready.go:40] duration metric: took 10.208558261s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1009 20:03:35.058739  455689 start.go:628] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1009 20:03:35.062049  455689 out.go:179] * Done! kubectl is now configured to use "pause-383163" cluster and "default" namespace by default
	I1009 20:03:30.516358  439734 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1009 20:03:30.516882  439734 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1009 20:03:30.516933  439734 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 20:03:30.516990  439734 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 20:03:30.562407  439734 cri.go:89] found id: "d77ac3de8a7a39d648d1517d5fa734b7495e3f909d23628a34c76de3d5ef57f6"
	I1009 20:03:30.562427  439734 cri.go:89] found id: ""
	I1009 20:03:30.562435  439734 logs.go:282] 1 containers: [d77ac3de8a7a39d648d1517d5fa734b7495e3f909d23628a34c76de3d5ef57f6]
	I1009 20:03:30.562495  439734 ssh_runner.go:195] Run: which crictl
	I1009 20:03:30.568119  439734 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 20:03:30.568192  439734 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 20:03:30.612048  439734 cri.go:89] found id: ""
	I1009 20:03:30.612071  439734 logs.go:282] 0 containers: []
	W1009 20:03:30.612079  439734 logs.go:284] No container was found matching "etcd"
	I1009 20:03:30.612086  439734 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 20:03:30.612145  439734 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 20:03:30.657850  439734 cri.go:89] found id: ""
	I1009 20:03:30.657879  439734 logs.go:282] 0 containers: []
	W1009 20:03:30.657889  439734 logs.go:284] No container was found matching "coredns"
	I1009 20:03:30.657896  439734 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 20:03:30.657958  439734 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 20:03:30.686208  439734 cri.go:89] found id: "860dfe4c78e184a9fee28ff64c7bf737667b86888d84a7c16dc138690d6b945c"
	I1009 20:03:30.686300  439734 cri.go:89] found id: ""
	I1009 20:03:30.686326  439734 logs.go:282] 1 containers: [860dfe4c78e184a9fee28ff64c7bf737667b86888d84a7c16dc138690d6b945c]
	I1009 20:03:30.686422  439734 ssh_runner.go:195] Run: which crictl
	I1009 20:03:30.690774  439734 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 20:03:30.690877  439734 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 20:03:30.718329  439734 cri.go:89] found id: ""
	I1009 20:03:30.718358  439734 logs.go:282] 0 containers: []
	W1009 20:03:30.718367  439734 logs.go:284] No container was found matching "kube-proxy"
	I1009 20:03:30.718374  439734 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 20:03:30.718439  439734 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 20:03:30.746795  439734 cri.go:89] found id: "56dd658f7c0543e07581d314639507f53947ef9a0096c18a5f0d9d2205eee024"
	I1009 20:03:30.746818  439734 cri.go:89] found id: ""
	I1009 20:03:30.746838  439734 logs.go:282] 1 containers: [56dd658f7c0543e07581d314639507f53947ef9a0096c18a5f0d9d2205eee024]
	I1009 20:03:30.746917  439734 ssh_runner.go:195] Run: which crictl
	I1009 20:03:30.750792  439734 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 20:03:30.750868  439734 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 20:03:30.778809  439734 cri.go:89] found id: ""
	I1009 20:03:30.778876  439734 logs.go:282] 0 containers: []
	W1009 20:03:30.778906  439734 logs.go:284] No container was found matching "kindnet"
	I1009 20:03:30.778921  439734 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1009 20:03:30.778996  439734 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1009 20:03:30.810367  439734 cri.go:89] found id: ""
	I1009 20:03:30.810391  439734 logs.go:282] 0 containers: []
	W1009 20:03:30.810399  439734 logs.go:284] No container was found matching "storage-provisioner"
	I1009 20:03:30.810408  439734 logs.go:123] Gathering logs for dmesg ...
	I1009 20:03:30.810440  439734 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 20:03:30.826971  439734 logs.go:123] Gathering logs for describe nodes ...
	I1009 20:03:30.827006  439734 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 20:03:30.902605  439734 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 20:03:30.902627  439734 logs.go:123] Gathering logs for kube-apiserver [d77ac3de8a7a39d648d1517d5fa734b7495e3f909d23628a34c76de3d5ef57f6] ...
	I1009 20:03:30.902640  439734 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d77ac3de8a7a39d648d1517d5fa734b7495e3f909d23628a34c76de3d5ef57f6"
	I1009 20:03:30.934787  439734 logs.go:123] Gathering logs for kube-scheduler [860dfe4c78e184a9fee28ff64c7bf737667b86888d84a7c16dc138690d6b945c] ...
	I1009 20:03:30.934816  439734 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 860dfe4c78e184a9fee28ff64c7bf737667b86888d84a7c16dc138690d6b945c"
	I1009 20:03:31.010141  439734 logs.go:123] Gathering logs for kube-controller-manager [56dd658f7c0543e07581d314639507f53947ef9a0096c18a5f0d9d2205eee024] ...
	I1009 20:03:31.010211  439734 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 56dd658f7c0543e07581d314639507f53947ef9a0096c18a5f0d9d2205eee024"
	I1009 20:03:31.038135  439734 logs.go:123] Gathering logs for CRI-O ...
	I1009 20:03:31.038164  439734 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 20:03:31.102981  439734 logs.go:123] Gathering logs for container status ...
	I1009 20:03:31.103026  439734 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 20:03:31.148932  439734 logs.go:123] Gathering logs for kubelet ...
	I1009 20:03:31.148965  439734 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 20:03:33.772536  439734 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1009 20:03:33.772945  439734 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1009 20:03:33.772985  439734 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 20:03:33.773040  439734 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 20:03:33.800859  439734 cri.go:89] found id: "d77ac3de8a7a39d648d1517d5fa734b7495e3f909d23628a34c76de3d5ef57f6"
	I1009 20:03:33.800880  439734 cri.go:89] found id: ""
	I1009 20:03:33.800888  439734 logs.go:282] 1 containers: [d77ac3de8a7a39d648d1517d5fa734b7495e3f909d23628a34c76de3d5ef57f6]
	I1009 20:03:33.800947  439734 ssh_runner.go:195] Run: which crictl
	I1009 20:03:33.804697  439734 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 20:03:33.804793  439734 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 20:03:33.833302  439734 cri.go:89] found id: ""
	I1009 20:03:33.833329  439734 logs.go:282] 0 containers: []
	W1009 20:03:33.833338  439734 logs.go:284] No container was found matching "etcd"
	I1009 20:03:33.833345  439734 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 20:03:33.833452  439734 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 20:03:33.859581  439734 cri.go:89] found id: ""
	I1009 20:03:33.859653  439734 logs.go:282] 0 containers: []
	W1009 20:03:33.859677  439734 logs.go:284] No container was found matching "coredns"
	I1009 20:03:33.859703  439734 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 20:03:33.859803  439734 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 20:03:33.888506  439734 cri.go:89] found id: "860dfe4c78e184a9fee28ff64c7bf737667b86888d84a7c16dc138690d6b945c"
	I1009 20:03:33.888569  439734 cri.go:89] found id: ""
	I1009 20:03:33.888593  439734 logs.go:282] 1 containers: [860dfe4c78e184a9fee28ff64c7bf737667b86888d84a7c16dc138690d6b945c]
	I1009 20:03:33.888680  439734 ssh_runner.go:195] Run: which crictl
	I1009 20:03:33.892751  439734 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 20:03:33.892855  439734 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 20:03:33.919854  439734 cri.go:89] found id: ""
	I1009 20:03:33.919881  439734 logs.go:282] 0 containers: []
	W1009 20:03:33.919891  439734 logs.go:284] No container was found matching "kube-proxy"
	I1009 20:03:33.919898  439734 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 20:03:33.919961  439734 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 20:03:33.946775  439734 cri.go:89] found id: "56dd658f7c0543e07581d314639507f53947ef9a0096c18a5f0d9d2205eee024"
	I1009 20:03:33.946805  439734 cri.go:89] found id: ""
	I1009 20:03:33.946814  439734 logs.go:282] 1 containers: [56dd658f7c0543e07581d314639507f53947ef9a0096c18a5f0d9d2205eee024]
	I1009 20:03:33.946871  439734 ssh_runner.go:195] Run: which crictl
	I1009 20:03:33.950664  439734 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 20:03:33.950771  439734 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 20:03:33.977950  439734 cri.go:89] found id: ""
	I1009 20:03:33.977974  439734 logs.go:282] 0 containers: []
	W1009 20:03:33.977984  439734 logs.go:284] No container was found matching "kindnet"
	I1009 20:03:33.977992  439734 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1009 20:03:33.978055  439734 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1009 20:03:34.010676  439734 cri.go:89] found id: ""
	I1009 20:03:34.010702  439734 logs.go:282] 0 containers: []
	W1009 20:03:34.010711  439734 logs.go:284] No container was found matching "storage-provisioner"
	I1009 20:03:34.010720  439734 logs.go:123] Gathering logs for kube-controller-manager [56dd658f7c0543e07581d314639507f53947ef9a0096c18a5f0d9d2205eee024] ...
	I1009 20:03:34.010732  439734 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 56dd658f7c0543e07581d314639507f53947ef9a0096c18a5f0d9d2205eee024"
	I1009 20:03:34.039185  439734 logs.go:123] Gathering logs for CRI-O ...
	I1009 20:03:34.039213  439734 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 20:03:34.107412  439734 logs.go:123] Gathering logs for container status ...
	I1009 20:03:34.107498  439734 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 20:03:34.140878  439734 logs.go:123] Gathering logs for kubelet ...
	I1009 20:03:34.140909  439734 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 20:03:34.261115  439734 logs.go:123] Gathering logs for dmesg ...
	I1009 20:03:34.261151  439734 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 20:03:34.278332  439734 logs.go:123] Gathering logs for describe nodes ...
	I1009 20:03:34.278361  439734 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 20:03:34.348342  439734 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 20:03:34.348406  439734 logs.go:123] Gathering logs for kube-apiserver [d77ac3de8a7a39d648d1517d5fa734b7495e3f909d23628a34c76de3d5ef57f6] ...
	I1009 20:03:34.348435  439734 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d77ac3de8a7a39d648d1517d5fa734b7495e3f909d23628a34c76de3d5ef57f6"
	I1009 20:03:34.402366  439734 logs.go:123] Gathering logs for kube-scheduler [860dfe4c78e184a9fee28ff64c7bf737667b86888d84a7c16dc138690d6b945c] ...
	I1009 20:03:34.402399  439734 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 860dfe4c78e184a9fee28ff64c7bf737667b86888d84a7c16dc138690d6b945c"
	
	
	==> CRI-O <==
	Oct 09 20:03:18 pause-383163 crio[2039]: time="2025-10-09T20:03:18.664937161Z" level=info msg="Created container b9eb2f7f088ee645099c5cd4b8e1f669da2435cd313728f2bfbfc759ff9937b6: kube-system/kube-controller-manager-pause-383163/kube-controller-manager" id=9e9aa368-c1a5-482d-bb44-9034b444823f name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 20:03:18 pause-383163 crio[2039]: time="2025-10-09T20:03:18.704370312Z" level=info msg="Created container 75d7f10be8c3e4bdde6c5890b28343819891d67d2e73eb06f38c47013ae3a3cb: kube-system/kube-scheduler-pause-383163/kube-scheduler" id=8259f2e1-08e3-49bb-85d3-cb7907f7a2c8 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 20:03:18 pause-383163 crio[2039]: time="2025-10-09T20:03:18.705009103Z" level=info msg="Started container" PID=2287 containerID=4a07552f3446603a46059c12e8713e08b798083b8d17d79c386bb391fc8c893c description=kube-system/kindnet-2blxf/kindnet-cni id=d8d3b118-65f8-4878-a3ea-f6858893b427 name=/runtime.v1.RuntimeService/StartContainer sandboxID=9c46ab11aac900411bcd9c2764ae1a2bebd9f21c37a51c6111dfd5acb0c37cf5
	Oct 09 20:03:18 pause-383163 crio[2039]: time="2025-10-09T20:03:18.711295341Z" level=info msg="Starting container: 75d7f10be8c3e4bdde6c5890b28343819891d67d2e73eb06f38c47013ae3a3cb" id=fc03464a-6152-4779-bc4d-778ddacdbbbb name=/runtime.v1.RuntimeService/StartContainer
	Oct 09 20:03:18 pause-383163 crio[2039]: time="2025-10-09T20:03:18.726078478Z" level=info msg="Starting container: b9eb2f7f088ee645099c5cd4b8e1f669da2435cd313728f2bfbfc759ff9937b6" id=7150753c-f53c-4efb-8206-f82c714843c6 name=/runtime.v1.RuntimeService/StartContainer
	Oct 09 20:03:18 pause-383163 crio[2039]: time="2025-10-09T20:03:18.726911567Z" level=info msg="Started container" PID=2306 containerID=75d7f10be8c3e4bdde6c5890b28343819891d67d2e73eb06f38c47013ae3a3cb description=kube-system/kube-scheduler-pause-383163/kube-scheduler id=fc03464a-6152-4779-bc4d-778ddacdbbbb name=/runtime.v1.RuntimeService/StartContainer sandboxID=998e6ac6b4306db15080773b62d2695890106febbe23c40defc5e57810c30474
	Oct 09 20:03:18 pause-383163 crio[2039]: time="2025-10-09T20:03:18.731174339Z" level=info msg="Created container a167ee63efd53a4275e3e7873bad1603ebe7e8a31f0dd3198756d4b1f148e52a: kube-system/kube-proxy-9k7j8/kube-proxy" id=41dfdd28-293d-4200-81e0-d7a35bbafb94 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 20:03:18 pause-383163 crio[2039]: time="2025-10-09T20:03:18.73861129Z" level=info msg="Started container" PID=2282 containerID=b9eb2f7f088ee645099c5cd4b8e1f669da2435cd313728f2bfbfc759ff9937b6 description=kube-system/kube-controller-manager-pause-383163/kube-controller-manager id=7150753c-f53c-4efb-8206-f82c714843c6 name=/runtime.v1.RuntimeService/StartContainer sandboxID=003237e08d9c2c14de2734fd9c7353dd50598d8846315860434b3b53f542920b
	Oct 09 20:03:18 pause-383163 crio[2039]: time="2025-10-09T20:03:18.746865946Z" level=info msg="Starting container: a167ee63efd53a4275e3e7873bad1603ebe7e8a31f0dd3198756d4b1f148e52a" id=6f8cc343-1e16-4d68-99d0-46a94b7de884 name=/runtime.v1.RuntimeService/StartContainer
	Oct 09 20:03:18 pause-383163 crio[2039]: time="2025-10-09T20:03:18.763967288Z" level=info msg="Created container cb4576736bbda7b31792b910061f810e74ebfe1099b49efb9c81dfdd2a1f445b: kube-system/kube-apiserver-pause-383163/kube-apiserver" id=d23689cb-aa19-4a6d-9844-f9469aecc5fb name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 20:03:18 pause-383163 crio[2039]: time="2025-10-09T20:03:18.76467264Z" level=info msg="Starting container: cb4576736bbda7b31792b910061f810e74ebfe1099b49efb9c81dfdd2a1f445b" id=19c3c930-3571-4765-8e49-8de55b56e7ea name=/runtime.v1.RuntimeService/StartContainer
	Oct 09 20:03:18 pause-383163 crio[2039]: time="2025-10-09T20:03:18.76587978Z" level=info msg="Started container" PID=2323 containerID=a167ee63efd53a4275e3e7873bad1603ebe7e8a31f0dd3198756d4b1f148e52a description=kube-system/kube-proxy-9k7j8/kube-proxy id=6f8cc343-1e16-4d68-99d0-46a94b7de884 name=/runtime.v1.RuntimeService/StartContainer sandboxID=955f14dcda9c8b6356d97c9f7ef3b7a84278561f49b1b2d33bccb16a0859e766
	Oct 09 20:03:18 pause-383163 crio[2039]: time="2025-10-09T20:03:18.768253026Z" level=info msg="Started container" PID=2303 containerID=cb4576736bbda7b31792b910061f810e74ebfe1099b49efb9c81dfdd2a1f445b description=kube-system/kube-apiserver-pause-383163/kube-apiserver id=19c3c930-3571-4765-8e49-8de55b56e7ea name=/runtime.v1.RuntimeService/StartContainer sandboxID=6abb6a0f31d585118179d190cdb532c3755dd34e4076ff29965d6fb14b07b7d8
	Oct 09 20:03:28 pause-383163 crio[2039]: time="2025-10-09T20:03:28.977245329Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 09 20:03:28 pause-383163 crio[2039]: time="2025-10-09T20:03:28.981549947Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 09 20:03:28 pause-383163 crio[2039]: time="2025-10-09T20:03:28.9816122Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 09 20:03:28 pause-383163 crio[2039]: time="2025-10-09T20:03:28.981638153Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 09 20:03:28 pause-383163 crio[2039]: time="2025-10-09T20:03:28.984952108Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 09 20:03:28 pause-383163 crio[2039]: time="2025-10-09T20:03:28.984990607Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 09 20:03:28 pause-383163 crio[2039]: time="2025-10-09T20:03:28.985014361Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 09 20:03:28 pause-383163 crio[2039]: time="2025-10-09T20:03:28.9894617Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 09 20:03:28 pause-383163 crio[2039]: time="2025-10-09T20:03:28.989497819Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 09 20:03:28 pause-383163 crio[2039]: time="2025-10-09T20:03:28.989522222Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 09 20:03:28 pause-383163 crio[2039]: time="2025-10-09T20:03:28.992612479Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 09 20:03:28 pause-383163 crio[2039]: time="2025-10-09T20:03:28.99264845Z" level=info msg="Updated default CNI network name to kindnet"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED              STATE               NAME                      ATTEMPT             POD ID              POD                                    NAMESPACE
	a167ee63efd53       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9   21 seconds ago       Running             kube-proxy                1                   955f14dcda9c8       kube-proxy-9k7j8                       kube-system
	cb4576736bbda       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196   21 seconds ago       Running             kube-apiserver            1                   6abb6a0f31d58       kube-apiserver-pause-383163            kube-system
	75d7f10be8c3e       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0   21 seconds ago       Running             kube-scheduler            1                   998e6ac6b4306       kube-scheduler-pause-383163            kube-system
	4a07552f34466       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c   21 seconds ago       Running             kindnet-cni               1                   9c46ab11aac90       kindnet-2blxf                          kube-system
	b9eb2f7f088ee       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a   21 seconds ago       Running             kube-controller-manager   1                   003237e08d9c2       kube-controller-manager-pause-383163   kube-system
	a3e1d7ac8b257       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e   21 seconds ago       Running             etcd                      1                   9b1a12a666139       etcd-pause-383163                      kube-system
	8b7b5b8265013       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc   22 seconds ago       Running             coredns                   1                   c339fddc9b980       coredns-66bc5c9577-kj4l8               kube-system
	8ab6890c2164d       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc   34 seconds ago       Exited              coredns                   0                   c339fddc9b980       coredns-66bc5c9577-kj4l8               kube-system
	bb34801376616       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9   About a minute ago   Exited              kube-proxy                0                   955f14dcda9c8       kube-proxy-9k7j8                       kube-system
	5b2ba970850f9       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c   About a minute ago   Exited              kindnet-cni               0                   9c46ab11aac90       kindnet-2blxf                          kube-system
	715315fe81996       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196   About a minute ago   Exited              kube-apiserver            0                   6abb6a0f31d58       kube-apiserver-pause-383163            kube-system
	f8690925bda20       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a   About a minute ago   Exited              kube-controller-manager   0                   003237e08d9c2       kube-controller-manager-pause-383163   kube-system
	5f2a4c1ed909b       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0   About a minute ago   Exited              kube-scheduler            0                   998e6ac6b4306       kube-scheduler-pause-383163            kube-system
	a58cd421c4789       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e   About a minute ago   Exited              etcd                      0                   9b1a12a666139       etcd-pause-383163                      kube-system
	
	
	==> coredns [8ab6890c2164d0b6bbc82e2679dbd67b5dfe706686726cd94224aaf22c16f80f] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:38501 - 49567 "HINFO IN 337892150647174265.2964964437038389437. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.011932299s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [8b7b5b8265013e32789ed2351787ae158830229a757a1bc103a4456924b76035] <==
	[ERROR] plugin/kubernetes: Unhandled Error
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: endpointslices.discovery.k8s.io is forbidden: User "system:serviceaccount:kube-system:coredns" cannot list resource "endpointslices" in API group "discovery.k8s.io" at the cluster scope
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: namespaces is forbidden: User "system:serviceaccount:kube-system:coredns" cannot list resource "namespaces" in API group "" at the cluster scope
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: services is forbidden: User "system:serviceaccount:kube-system:coredns" cannot list resource "services" in API group "" at the cluster scope
	[ERROR] plugin/kubernetes: Unhandled Error
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:52404 - 26900 "HINFO IN 3764607938427269200.9140827235858688719. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.022767923s
	
	
	==> describe nodes <==
	Name:               pause-383163
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=pause-383163
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=67931d2cc0a2e29153be17ee4f2d502b8a45c9cb
	                    minikube.k8s.io/name=pause-383163
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_09T20_02_20_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 09 Oct 2025 20:02:16 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-383163
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 09 Oct 2025 20:03:33 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 09 Oct 2025 20:03:05 +0000   Thu, 09 Oct 2025 20:02:12 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 09 Oct 2025 20:03:05 +0000   Thu, 09 Oct 2025 20:02:12 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 09 Oct 2025 20:03:05 +0000   Thu, 09 Oct 2025 20:02:12 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 09 Oct 2025 20:03:05 +0000   Thu, 09 Oct 2025 20:03:05 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    pause-383163
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022292Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022292Ki
	  pods:               110
	System Info:
	  Machine ID:                 5a885f66b42d4c388ad3d29291a058dd
	  System UUID:                4da07121-3df8-4e47-9e7a-f63fa3550e7e
	  Boot ID:                    7eb9c7d9-8be8-415a-b196-052b632584a4
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-66bc5c9577-kj4l8                100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     76s
	  kube-system                 etcd-pause-383163                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         81s
	  kube-system                 kindnet-2blxf                           100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      77s
	  kube-system                 kube-apiserver-pause-383163             250m (12%)    0 (0%)      0 (0%)           0 (0%)         81s
	  kube-system                 kube-controller-manager-pause-383163    200m (10%)    0 (0%)      0 (0%)           0 (0%)         81s
	  kube-system                 kube-proxy-9k7j8                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         77s
	  kube-system                 kube-scheduler-pause-383163             100m (5%)     0 (0%)      0 (0%)           0 (0%)         82s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age   From             Message
	  ----     ------                   ----  ----             -------
	  Normal   Starting                 75s   kube-proxy       
	  Normal   Starting                 17s   kube-proxy       
	  Normal   Starting                 81s   kubelet          Starting kubelet.
	  Warning  CgroupV1                 81s   kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  81s   kubelet          Node pause-383163 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    81s   kubelet          Node pause-383163 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     81s   kubelet          Node pause-383163 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           77s   node-controller  Node pause-383163 event: Registered Node pause-383163 in Controller
	  Normal   NodeReady                35s   kubelet          Node pause-383163 status is now: NodeReady
	  Normal   RegisteredNode           14s   node-controller  Node pause-383163 event: Registered Node pause-383163 in Controller
	
	
	==> dmesg <==
	[  +3.297009] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:28] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:36] overlayfs: idmapped layers are currently not supported
	[  +4.492991] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:37] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:38] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:40] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:45] overlayfs: idmapped layers are currently not supported
	[ +36.012100] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:47] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:48] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:49] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:50] overlayfs: idmapped layers are currently not supported
	[ +27.967875] overlayfs: idmapped layers are currently not supported
	[  +2.167003] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:51] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:52] overlayfs: idmapped layers are currently not supported
	[ +41.056229] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:53] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:54] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:55] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:57] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:59] overlayfs: idmapped layers are currently not supported
	[ +30.257956] overlayfs: idmapped layers are currently not supported
	[Oct 9 20:02] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [a3e1d7ac8b25781dbad544fea22784db5fdb0f4de80670ff5f131dc3cc536739] <==
	{"level":"warn","ts":"2025-10-09T20:03:21.375735Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51044","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T20:03:21.391609Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51054","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T20:03:21.434365Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51074","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T20:03:21.461540Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51084","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T20:03:21.480750Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51092","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T20:03:21.501478Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51106","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T20:03:21.515755Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51124","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T20:03:21.539464Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51144","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T20:03:21.556936Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51160","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T20:03:21.575518Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51188","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T20:03:21.642229Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51214","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T20:03:21.677529Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51224","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T20:03:21.715546Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51244","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T20:03:21.757436Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51256","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T20:03:21.775602Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51278","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T20:03:21.803818Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51308","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T20:03:21.847522Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51320","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T20:03:21.872653Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51346","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T20:03:21.913879Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51362","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T20:03:21.919200Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39074","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T20:03:21.939551Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39102","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T20:03:21.963817Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39118","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T20:03:21.986093Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39144","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T20:03:21.999664Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39162","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T20:03:22.120836Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39178","server-name":"","error":"EOF"}
	
	
	==> etcd [a58cd421c4789d3b1e15645239af493035656079a6c5a7405c605212d4f12db9] <==
	{"level":"warn","ts":"2025-10-09T20:02:15.230802Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51702","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T20:02:15.264631Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51722","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T20:02:15.298613Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51734","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T20:02:15.324680Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51744","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T20:02:15.344005Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51758","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T20:02:15.360557Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51772","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T20:02:15.458828Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51788","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-10-09T20:03:10.323359Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-10-09T20:03:10.323409Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"pause-383163","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.85.2:2380"],"advertise-client-urls":["https://192.168.85.2:2379"]}
	{"level":"error","ts":"2025-10-09T20:03:10.323502Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-10-09T20:03:10.472425Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-10-09T20:03:10.472527Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-09T20:03:10.472551Z","caller":"etcdserver/server.go:1281","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"9f0758e1c58a86ed","current-leader-member-id":"9f0758e1c58a86ed"}
	{"level":"info","ts":"2025-10-09T20:03:10.472663Z","caller":"etcdserver/server.go:2342","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"info","ts":"2025-10-09T20:03:10.472685Z","caller":"etcdserver/server.go:2319","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"warn","ts":"2025-10-09T20:03:10.472748Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-10-09T20:03:10.472826Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"error","ts":"2025-10-09T20:03:10.472863Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"warn","ts":"2025-10-09T20:03:10.472962Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.85.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-10-09T20:03:10.472982Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.85.2:2379: use of closed network connection"}
	{"level":"error","ts":"2025-10-09T20:03:10.472992Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.85.2:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-09T20:03:10.475929Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.85.2:2380"}
	{"level":"error","ts":"2025-10-09T20:03:10.476027Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.85.2:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-09T20:03:10.476065Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2025-10-09T20:03:10.476074Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"pause-383163","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.85.2:2380"],"advertise-client-urls":["https://192.168.85.2:2379"]}
	
	
	==> kernel <==
	 20:03:40 up  2:45,  0 user,  load average: 3.02, 2.57, 2.13
	Linux pause-383163 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [4a07552f3446603a46059c12e8713e08b798083b8d17d79c386bb391fc8c893c] <==
	I1009 20:03:18.712390       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1009 20:03:18.716840       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1009 20:03:18.717036       1 main.go:148] setting mtu 1500 for CNI 
	I1009 20:03:18.717050       1 main.go:178] kindnetd IP family: "ipv4"
	I1009 20:03:18.717070       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-09T20:03:18Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1009 20:03:18.976501       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1009 20:03:18.976536       1 controller.go:381] "Waiting for informer caches to sync"
	I1009 20:03:18.976545       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1009 20:03:18.977358       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1009 20:03:23.331133       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:serviceaccount:kube-system:kindnet\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	I1009 20:03:24.276875       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1009 20:03:24.276990       1 metrics.go:72] Registering metrics
	I1009 20:03:24.277093       1 controller.go:711] "Syncing nftables rules"
	I1009 20:03:28.976733       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1009 20:03:28.976795       1 main.go:301] handling current node
	I1009 20:03:38.976333       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1009 20:03:38.976380       1 main.go:301] handling current node
	
	
	==> kindnet [5b2ba970850f91ec7dc47036664e17c95915f9f4e974dfe18f12c57f19dc05a3] <==
	I1009 20:02:24.905278       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1009 20:02:24.905674       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1009 20:02:24.905828       1 main.go:148] setting mtu 1500 for CNI 
	I1009 20:02:24.905869       1 main.go:178] kindnetd IP family: "ipv4"
	I1009 20:02:24.905909       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-09T20:02:25Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1009 20:02:25.105600       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1009 20:02:25.105680       1 controller.go:381] "Waiting for informer caches to sync"
	I1009 20:02:25.105895       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1009 20:02:25.106558       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1009 20:02:55.106893       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1009 20:02:55.107177       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1009 20:02:55.107308       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1009 20:02:55.107439       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	I1009 20:02:56.607615       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1009 20:02:56.607664       1 metrics.go:72] Registering metrics
	I1009 20:02:56.607734       1 controller.go:711] "Syncing nftables rules"
	I1009 20:03:05.105611       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1009 20:03:05.105680       1 main.go:301] handling current node
	
	
	==> kube-apiserver [715315fe8199656e0b35e6405a491d6927104742238ac1c9811ad467110e9936] <==
	W1009 20:03:10.331024       1 logging.go:55] [core] [Channel #115 SubChannel #117]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1009 20:03:10.331074       1 logging.go:55] [core] [Channel #7 SubChannel #9]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1009 20:03:10.331122       1 logging.go:55] [core] [Channel #59 SubChannel #61]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1009 20:03:10.331175       1 logging.go:55] [core] [Channel #123 SubChannel #125]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1009 20:03:10.331220       1 logging.go:55] [core] [Channel #199 SubChannel #201]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1009 20:03:10.331301       1 logging.go:55] [core] [Channel #255 SubChannel #257]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1009 20:03:10.331350       1 logging.go:55] [core] [Channel #91 SubChannel #93]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1009 20:03:10.331472       1 logging.go:55] [core] [Channel #103 SubChannel #105]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1009 20:03:10.331625       1 logging.go:55] [core] [Channel #119 SubChannel #121]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1009 20:03:10.332413       1 logging.go:55] [core] [Channel #43 SubChannel #45]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1009 20:03:10.332468       1 logging.go:55] [core] [Channel #163 SubChannel #165]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1009 20:03:10.332517       1 logging.go:55] [core] [Channel #223 SubChannel #225]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1009 20:03:10.332558       1 logging.go:55] [core] [Channel #13 SubChannel #15]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1009 20:03:10.332594       1 logging.go:55] [core] [Channel #231 SubChannel #233]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1009 20:03:10.332636       1 logging.go:55] [core] [Channel #47 SubChannel #49]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1009 20:03:10.332679       1 logging.go:55] [core] [Channel #63 SubChannel #65]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1009 20:03:10.332720       1 logging.go:55] [core] [Channel #71 SubChannel #73]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1009 20:03:10.332773       1 logging.go:55] [core] [Channel #147 SubChannel #149]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1009 20:03:10.332811       1 logging.go:55] [core] [Channel #183 SubChannel #185]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1009 20:03:10.332849       1 logging.go:55] [core] [Channel #207 SubChannel #209]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1009 20:03:10.332904       1 logging.go:55] [core] [Channel #27 SubChannel #29]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1009 20:03:10.332945       1 logging.go:55] [core] [Channel #39 SubChannel #41]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1009 20:03:10.332991       1 logging.go:55] [core] [Channel #2 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1009 20:03:10.343103       1 logging.go:55] [core] [Channel #211 SubChannel #213]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-apiserver [cb4576736bbda7b31792b910061f810e74ebfe1099b49efb9c81dfdd2a1f445b] <==
	I1009 20:03:23.357373       1 policy_source.go:240] refreshing policies
	I1009 20:03:23.375794       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1009 20:03:23.375870       1 aggregator.go:171] initial CRD sync complete...
	I1009 20:03:23.375884       1 autoregister_controller.go:144] Starting autoregister controller
	I1009 20:03:23.375893       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1009 20:03:23.375907       1 cache.go:39] Caches are synced for autoregister controller
	I1009 20:03:23.379870       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1009 20:03:23.387857       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1009 20:03:23.388142       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1009 20:03:23.388160       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1009 20:03:23.389412       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1009 20:03:23.389644       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1009 20:03:23.389697       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1009 20:03:23.394171       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1009 20:03:23.394479       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1009 20:03:23.394572       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1009 20:03:23.399383       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1009 20:03:23.399505       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	E1009 20:03:23.405664       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1009 20:03:23.903571       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1009 20:03:25.198009       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1009 20:03:26.642498       1 controller.go:667] quota admission added evaluator for: endpoints
	I1009 20:03:26.841005       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1009 20:03:26.893559       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1009 20:03:26.994182       1 controller.go:667] quota admission added evaluator for: deployments.apps
	
	
	==> kube-controller-manager [b9eb2f7f088ee645099c5cd4b8e1f669da2435cd313728f2bfbfc759ff9937b6] <==
	I1009 20:03:26.590516       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1009 20:03:26.591668       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1009 20:03:26.593926       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1009 20:03:26.598325       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1009 20:03:26.598350       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1009 20:03:26.598357       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1009 20:03:26.603186       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1009 20:03:26.604158       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1009 20:03:26.618065       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1009 20:03:26.626650       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1009 20:03:26.626713       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1009 20:03:26.626750       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1009 20:03:26.626766       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1009 20:03:26.626773       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1009 20:03:26.629165       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1009 20:03:26.633728       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1009 20:03:26.633795       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1009 20:03:26.639182       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1009 20:03:26.639235       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1009 20:03:26.639414       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1009 20:03:26.639639       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1009 20:03:26.639951       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1009 20:03:26.645144       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1009 20:03:26.653769       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1009 20:03:26.656175       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-controller-manager [f8690925bda20a05089b5b66d446d2a265402cbc16285b44139837240ca69a30] <==
	I1009 20:02:23.331754       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1009 20:02:23.341479       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1009 20:02:23.341578       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1009 20:02:23.347899       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1009 20:02:23.349087       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1009 20:02:23.358497       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1009 20:02:23.358575       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1009 20:02:23.358611       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1009 20:02:23.358635       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1009 20:02:23.358641       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1009 20:02:23.365169       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1009 20:02:23.374069       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1009 20:02:23.378772       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1009 20:02:23.379257       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1009 20:02:23.379403       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1009 20:02:23.379496       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1009 20:02:23.379590       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1009 20:02:23.381861       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1009 20:02:23.384371       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1009 20:02:23.384490       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1009 20:02:23.384620       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1009 20:02:23.390607       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1009 20:02:23.392980       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1009 20:02:23.405942       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="pause-383163" podCIDRs=["10.244.0.0/24"]
	I1009 20:03:08.338175       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [a167ee63efd53a4275e3e7873bad1603ebe7e8a31f0dd3198756d4b1f148e52a] <==
	I1009 20:03:21.890582       1 server_linux.go:53] "Using iptables proxy"
	I1009 20:03:22.467785       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1009 20:03:23.368537       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1009 20:03:23.368701       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1009 20:03:23.368816       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1009 20:03:23.456158       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1009 20:03:23.456222       1 server_linux.go:132] "Using iptables Proxier"
	I1009 20:03:23.491801       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1009 20:03:23.492352       1 server.go:527] "Version info" version="v1.34.1"
	I1009 20:03:23.492386       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1009 20:03:23.493975       1 config.go:106] "Starting endpoint slice config controller"
	I1009 20:03:23.494061       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1009 20:03:23.494403       1 config.go:200] "Starting service config controller"
	I1009 20:03:23.494466       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1009 20:03:23.494844       1 config.go:403] "Starting serviceCIDR config controller"
	I1009 20:03:23.494901       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1009 20:03:23.495488       1 config.go:309] "Starting node config controller"
	I1009 20:03:23.495570       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1009 20:03:23.495601       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1009 20:03:23.595328       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1009 20:03:23.595367       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1009 20:03:23.595410       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-proxy [bb3480137661617835e8f2461eda88ed8e0afcd207648cb4d703a117457533cf] <==
	I1009 20:02:24.920527       1 server_linux.go:53] "Using iptables proxy"
	I1009 20:02:25.019031       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1009 20:02:25.119796       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1009 20:02:25.119833       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1009 20:02:25.119923       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1009 20:02:25.205068       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1009 20:02:25.205221       1 server_linux.go:132] "Using iptables Proxier"
	I1009 20:02:25.209897       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1009 20:02:25.210317       1 server.go:527] "Version info" version="v1.34.1"
	I1009 20:02:25.210525       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1009 20:02:25.212038       1 config.go:200] "Starting service config controller"
	I1009 20:02:25.212059       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1009 20:02:25.212077       1 config.go:106] "Starting endpoint slice config controller"
	I1009 20:02:25.212082       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1009 20:02:25.212109       1 config.go:403] "Starting serviceCIDR config controller"
	I1009 20:02:25.212120       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1009 20:02:25.212761       1 config.go:309] "Starting node config controller"
	I1009 20:02:25.212780       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1009 20:02:25.212787       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1009 20:02:25.313801       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1009 20:02:25.313835       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1009 20:02:25.313879       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [5f2a4c1ed909bfd58b69d5787042aa91a6c0d43e3eef176ba2274c648fad521a] <==
	E1009 20:02:16.545958       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1009 20:02:16.546011       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1009 20:02:16.546083       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1009 20:02:16.546159       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1009 20:02:16.546219       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1009 20:02:16.546280       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1009 20:02:16.546345       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1009 20:02:16.546402       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1009 20:02:16.546456       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1009 20:02:16.546534       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1009 20:02:16.546761       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1009 20:02:16.546883       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E1009 20:02:16.546925       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1009 20:02:16.546940       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1009 20:02:17.419458       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1009 20:02:17.446882       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1009 20:02:17.505233       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1009 20:02:17.532989       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	I1009 20:02:18.084259       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1009 20:03:10.329864       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	I1009 20:03:10.329893       1 server.go:263] "[graceful-termination] secure server has stopped listening"
	I1009 20:03:10.329914       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I1009 20:03:10.329952       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1009 20:03:10.330109       1 server.go:265] "[graceful-termination] secure server is exiting"
	E1009 20:03:10.330138       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [75d7f10be8c3e4bdde6c5890b28343819891d67d2e73eb06f38c47013ae3a3cb] <==
	I1009 20:03:22.091775       1 serving.go:386] Generated self-signed cert in-memory
	I1009 20:03:24.484580       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1009 20:03:24.484617       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1009 20:03:24.494684       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1009 20:03:24.494743       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1009 20:03:24.494800       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1009 20:03:24.494807       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1009 20:03:24.494841       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1009 20:03:24.494868       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1009 20:03:24.496192       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1009 20:03:24.496451       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1009 20:03:24.595699       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1009 20:03:24.595836       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1009 20:03:24.595940       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 09 20:03:18 pause-383163 kubelet[1291]: E1009 20:03:18.476314    1291 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-pause-383163\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="3e244fd3a5e69bdedf4fc7a419241dd5" pod="kube-system/kube-controller-manager-pause-383163"
	Oct 09 20:03:18 pause-383163 kubelet[1291]: E1009 20:03:18.476806    1291 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kindnet-2blxf\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="2bcb5c94-b301-4db9-bcf2-5f6eba8b07c7" pod="kube-system/kindnet-2blxf"
	Oct 09 20:03:18 pause-383163 kubelet[1291]: E1009 20:03:18.477087    1291 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-9k7j8\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="b521ebd5-2359-4c44-9357-f2ac6cdd9719" pod="kube-system/kube-proxy-9k7j8"
	Oct 09 20:03:18 pause-383163 kubelet[1291]: E1009 20:03:18.477343    1291 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/coredns-66bc5c9577-kj4l8\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="9347b1f1-06ba-4612-96e4-9f5e09ba2500" pod="kube-system/coredns-66bc5c9577-kj4l8"
	Oct 09 20:03:23 pause-383163 kubelet[1291]: E1009 20:03:23.118300    1291 status_manager.go:1018] "Failed to get status for pod" err="pods \"kube-scheduler-pause-383163\" is forbidden: User \"system:node:pause-383163\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-383163' and this object" podUID="3dc83ccd270fb312848e6bb9a10a204a" pod="kube-system/kube-scheduler-pause-383163"
	Oct 09 20:03:23 pause-383163 kubelet[1291]: E1009 20:03:23.119171    1291 reflector.go:205] "Failed to watch" err="configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:pause-383163\" cannot watch resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-383163' and this object" logger="UnhandledError" reflector="object-\"kube-system\"/\"kube-root-ca.crt\"" type="*v1.ConfigMap"
	Oct 09 20:03:23 pause-383163 kubelet[1291]: E1009 20:03:23.119427    1291 reflector.go:205] "Failed to watch" err="configmaps \"kube-proxy\" is forbidden: User \"system:node:pause-383163\" cannot watch resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-383163' and this object" logger="UnhandledError" reflector="object-\"kube-system\"/\"kube-proxy\"" type="*v1.ConfigMap"
	Oct 09 20:03:23 pause-383163 kubelet[1291]: E1009 20:03:23.278734    1291 status_manager.go:1018] "Failed to get status for pod" err="pods \"kube-apiserver-pause-383163\" is forbidden: User \"system:node:pause-383163\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-383163' and this object" podUID="3dcaaa6838efe81036d876fec785ce3f" pod="kube-system/kube-apiserver-pause-383163"
	Oct 09 20:03:23 pause-383163 kubelet[1291]: E1009 20:03:23.294100    1291 status_manager.go:1018] "Failed to get status for pod" err="pods \"etcd-pause-383163\" is forbidden: User \"system:node:pause-383163\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-383163' and this object" podUID="739ecb3823ee6112ca137b686c87fc3b" pod="kube-system/etcd-pause-383163"
	Oct 09 20:03:23 pause-383163 kubelet[1291]: E1009 20:03:23.296475    1291 status_manager.go:1018] "Failed to get status for pod" err="pods \"kube-controller-manager-pause-383163\" is forbidden: User \"system:node:pause-383163\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-383163' and this object" podUID="3e244fd3a5e69bdedf4fc7a419241dd5" pod="kube-system/kube-controller-manager-pause-383163"
	Oct 09 20:03:23 pause-383163 kubelet[1291]: E1009 20:03:23.299397    1291 status_manager.go:1018] "Failed to get status for pod" err="pods \"kindnet-2blxf\" is forbidden: User \"system:node:pause-383163\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-383163' and this object" podUID="2bcb5c94-b301-4db9-bcf2-5f6eba8b07c7" pod="kube-system/kindnet-2blxf"
	Oct 09 20:03:23 pause-383163 kubelet[1291]: E1009 20:03:23.302495    1291 status_manager.go:1018] "Failed to get status for pod" err="pods \"kube-proxy-9k7j8\" is forbidden: User \"system:node:pause-383163\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-383163' and this object" podUID="b521ebd5-2359-4c44-9357-f2ac6cdd9719" pod="kube-system/kube-proxy-9k7j8"
	Oct 09 20:03:23 pause-383163 kubelet[1291]: E1009 20:03:23.310269    1291 status_manager.go:1018] "Failed to get status for pod" err=<
	Oct 09 20:03:23 pause-383163 kubelet[1291]:         pods "coredns-66bc5c9577-kj4l8" is forbidden: User "system:node:pause-383163" cannot get resource "pods" in API group "" in the namespace "kube-system": no relationship found between node 'pause-383163' and this object
	Oct 09 20:03:23 pause-383163 kubelet[1291]:         RBAC: [role.rbac.authorization.k8s.io "kubeadm:kubelet-config" not found, role.rbac.authorization.k8s.io "kubeadm:nodes-kubeadm-config" not found]
	Oct 09 20:03:23 pause-383163 kubelet[1291]:  > podUID="9347b1f1-06ba-4612-96e4-9f5e09ba2500" pod="kube-system/coredns-66bc5c9577-kj4l8"
	Oct 09 20:03:23 pause-383163 kubelet[1291]: E1009 20:03:23.318606    1291 status_manager.go:1018] "Failed to get status for pod" err="pods \"kube-apiserver-pause-383163\" is forbidden: User \"system:node:pause-383163\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-383163' and this object" podUID="3dcaaa6838efe81036d876fec785ce3f" pod="kube-system/kube-apiserver-pause-383163"
	Oct 09 20:03:23 pause-383163 kubelet[1291]: E1009 20:03:23.319890    1291 status_manager.go:1018] "Failed to get status for pod" err="pods \"etcd-pause-383163\" is forbidden: User \"system:node:pause-383163\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-383163' and this object" podUID="739ecb3823ee6112ca137b686c87fc3b" pod="kube-system/etcd-pause-383163"
	Oct 09 20:03:23 pause-383163 kubelet[1291]: E1009 20:03:23.323034    1291 status_manager.go:1018] "Failed to get status for pod" err="pods \"kube-controller-manager-pause-383163\" is forbidden: User \"system:node:pause-383163\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-383163' and this object" podUID="3e244fd3a5e69bdedf4fc7a419241dd5" pod="kube-system/kube-controller-manager-pause-383163"
	Oct 09 20:03:23 pause-383163 kubelet[1291]: E1009 20:03:23.324168    1291 status_manager.go:1018] "Failed to get status for pod" err="pods \"kindnet-2blxf\" is forbidden: User \"system:node:pause-383163\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-383163' and this object" podUID="2bcb5c94-b301-4db9-bcf2-5f6eba8b07c7" pod="kube-system/kindnet-2blxf"
	Oct 09 20:03:23 pause-383163 kubelet[1291]: E1009 20:03:23.326198    1291 status_manager.go:1018] "Failed to get status for pod" err="pods \"kube-proxy-9k7j8\" is forbidden: User \"system:node:pause-383163\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-383163' and this object" podUID="b521ebd5-2359-4c44-9357-f2ac6cdd9719" pod="kube-system/kube-proxy-9k7j8"
	Oct 09 20:03:29 pause-383163 kubelet[1291]: W1009 20:03:29.486903    1291 conversion.go:112] Could not get instant cpu stats: cumulative stats decrease
	Oct 09 20:03:35 pause-383163 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 09 20:03:35 pause-383163 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 09 20:03:35 pause-383163 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p pause-383163 -n pause-383163
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p pause-383163 -n pause-383163: exit status 2 (492.333556ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context pause-383163 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestPause/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestPause/serial/Pause (6.92s)
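Note on the failure above: the kubelet log tail shows systemd stopping kubelet.service after the pause, while the post-mortem APIServer probe still prints "Running" and exits with status 2. The lines below are a minimal, hand-run sketch of those same post-mortem checks; the profile name, binary path, and status flags are taken from the output above, and the ssh-based kubelet check is an added assumption about how one might confirm node state, not part of the test suite.

    # Sketch only: re-run the post-mortem probes from this section by hand.
    PROFILE=pause-383163                 # profile name from the log above
    MINIKUBE=out/minikube-linux-arm64    # binary path used by this CI job

    # Same probe helpers_test.go runs: a Go template that prints only the APIServer field.
    "$MINIKUBE" status --format='{{.APIServer}}' -p "$PROFILE" -n "$PROFILE"

    # kubelet was stopped by systemd after the pause; confirm that inside the node
    # (assumes the kicbase container is still running).
    "$MINIKUBE" ssh -p "$PROFILE" -- sudo systemctl is-active kubelet || true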

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (3.59s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-670649 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-670649 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (298.591236ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-09T20:16:20Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-670649 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context old-k8s-version-670649 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context old-k8s-version-670649 describe deploy/metrics-server -n kube-system: exit status 1 (86.227611ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): deployments.apps "metrics-server" not found

                                                
                                                
** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context old-k8s-version-670649 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
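Note on the exit status 11 above: MK_ADDON_ENABLE_PAUSED comes from minikube's paused-state check, which lists containers inside the node with "sudo runc list -f json"; on this crio node that command fails because /run/runc does not exist. The lines below are a sketch of reproducing that check by hand; the profile name and the runc command are taken verbatim from the stderr above, while the crictl call is an assumed, crio-native way to list the same containers for comparison, not what minikube itself runs.

    # Sketch only: reproduce the paused-state check that produced MK_ADDON_ENABLE_PAUSED.
    PROFILE=old-k8s-version-670649        # profile name from the failing command above
    MINIKUBE=out/minikube-linux-arm64     # binary path used by this CI job

    # The check that failed in the log: runc has no state directory at /run/runc on this node.
    "$MINIKUBE" ssh -p "$PROFILE" -- sudo runc list -f json; echo "runc exit: $?"

    # crio-managed containers, listed via the CRI for comparison (assumed diagnostic step).
    "$MINIKUBE" ssh -p "$PROFILE" -- sudo crictl ps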
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect old-k8s-version-670649
helpers_test.go:243: (dbg) docker inspect old-k8s-version-670649:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "242f5a73bf3408c78204127e16255d5d302b161639419f815a7a343ee83b928d",
	        "Created": "2025-10-09T20:15:15.014520334Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 474621,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-09T20:15:15.111943409Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:0e619a02aa1f399c748a05785ca381533d7a01df724b62f3a9ddd9e288db8f6f",
	        "ResolvConfPath": "/var/lib/docker/containers/242f5a73bf3408c78204127e16255d5d302b161639419f815a7a343ee83b928d/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/242f5a73bf3408c78204127e16255d5d302b161639419f815a7a343ee83b928d/hostname",
	        "HostsPath": "/var/lib/docker/containers/242f5a73bf3408c78204127e16255d5d302b161639419f815a7a343ee83b928d/hosts",
	        "LogPath": "/var/lib/docker/containers/242f5a73bf3408c78204127e16255d5d302b161639419f815a7a343ee83b928d/242f5a73bf3408c78204127e16255d5d302b161639419f815a7a343ee83b928d-json.log",
	        "Name": "/old-k8s-version-670649",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-670649:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-670649",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "242f5a73bf3408c78204127e16255d5d302b161639419f815a7a343ee83b928d",
	                "LowerDir": "/var/lib/docker/overlay2/27381f7e732d8c7d661645d8c8bce4a7b4487d7ccc8446c8ec75884f80dfc2aa-init/diff:/var/lib/docker/overlay2/810a91395ed9b7ed2c0bbbdee8600efcf64f88722cbabc47d471235a9f901ed9/diff",
	                "MergedDir": "/var/lib/docker/overlay2/27381f7e732d8c7d661645d8c8bce4a7b4487d7ccc8446c8ec75884f80dfc2aa/merged",
	                "UpperDir": "/var/lib/docker/overlay2/27381f7e732d8c7d661645d8c8bce4a7b4487d7ccc8446c8ec75884f80dfc2aa/diff",
	                "WorkDir": "/var/lib/docker/overlay2/27381f7e732d8c7d661645d8c8bce4a7b4487d7ccc8446c8ec75884f80dfc2aa/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-670649",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-670649/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-670649",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-670649",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-670649",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "aa0ce7d08fbd7b0dc958a037488c98dd756c6386a1447f8e5664ba7c57fcc1ef",
	            "SandboxKey": "/var/run/docker/netns/aa0ce7d08fbd",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33416"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33417"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33420"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33418"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33419"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-670649": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "ce:43:01:7a:1a:5e",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "9f71ce8c90e918d3740f414c21f48298da6003535f949f572c810d48866acbdf",
	                    "EndpointID": "e813b0d8b8fdb56ef7fcdfcfea252427feab5d911fce533504703a740f0414e6",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-670649",
	                        "242f5a73bf34"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-670649 -n old-k8s-version-670649
helpers_test.go:252: <<< TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-670649 logs -n 25
E1009 20:16:21.973949  296002 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/functional-326957/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p old-k8s-version-670649 logs -n 25: (2.039335041s)
helpers_test.go:260: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────
────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────
────┤
	│ ssh     │ -p cilium-535911 sudo cri-dockerd --version                                                                                                                                                                                                   │ cilium-535911             │ jenkins │ v1.37.0 │ 09 Oct 25 20:05 UTC │                     │
	│ ssh     │ -p cilium-535911 sudo systemctl status containerd --all --full --no-pager                                                                                                                                                                     │ cilium-535911             │ jenkins │ v1.37.0 │ 09 Oct 25 20:05 UTC │                     │
	│ ssh     │ -p cilium-535911 sudo systemctl cat containerd --no-pager                                                                                                                                                                                     │ cilium-535911             │ jenkins │ v1.37.0 │ 09 Oct 25 20:05 UTC │                     │
	│ ssh     │ -p cilium-535911 sudo cat /lib/systemd/system/containerd.service                                                                                                                                                                              │ cilium-535911             │ jenkins │ v1.37.0 │ 09 Oct 25 20:05 UTC │                     │
	│ ssh     │ -p cilium-535911 sudo cat /etc/containerd/config.toml                                                                                                                                                                                         │ cilium-535911             │ jenkins │ v1.37.0 │ 09 Oct 25 20:05 UTC │                     │
	│ ssh     │ -p cilium-535911 sudo containerd config dump                                                                                                                                                                                                  │ cilium-535911             │ jenkins │ v1.37.0 │ 09 Oct 25 20:05 UTC │                     │
	│ ssh     │ -p cilium-535911 sudo systemctl status crio --all --full --no-pager                                                                                                                                                                           │ cilium-535911             │ jenkins │ v1.37.0 │ 09 Oct 25 20:05 UTC │                     │
	│ ssh     │ -p cilium-535911 sudo systemctl cat crio --no-pager                                                                                                                                                                                           │ cilium-535911             │ jenkins │ v1.37.0 │ 09 Oct 25 20:05 UTC │                     │
	│ ssh     │ -p cilium-535911 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                 │ cilium-535911             │ jenkins │ v1.37.0 │ 09 Oct 25 20:05 UTC │                     │
	│ ssh     │ -p cilium-535911 sudo crio config                                                                                                                                                                                                             │ cilium-535911             │ jenkins │ v1.37.0 │ 09 Oct 25 20:05 UTC │                     │
	│ delete  │ -p cilium-535911                                                                                                                                                                                                                              │ cilium-535911             │ jenkins │ v1.37.0 │ 09 Oct 25 20:05 UTC │ 09 Oct 25 20:05 UTC │
	│ start   │ -p force-systemd-env-242564 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                                                                                                                                    │ force-systemd-env-242564  │ jenkins │ v1.37.0 │ 09 Oct 25 20:05 UTC │                     │
	│ ssh     │ force-systemd-flag-736218 ssh cat /etc/crio/crio.conf.d/02-crio.conf                                                                                                                                                                          │ force-systemd-flag-736218 │ jenkins │ v1.37.0 │ 09 Oct 25 20:12 UTC │ 09 Oct 25 20:12 UTC │
	│ delete  │ -p force-systemd-flag-736218                                                                                                                                                                                                                  │ force-systemd-flag-736218 │ jenkins │ v1.37.0 │ 09 Oct 25 20:12 UTC │ 09 Oct 25 20:12 UTC │
	│ start   │ -p cert-expiration-282540 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio                                                                                                                                        │ cert-expiration-282540    │ jenkins │ v1.37.0 │ 09 Oct 25 20:12 UTC │ 09 Oct 25 20:12 UTC │
	│ delete  │ -p force-systemd-env-242564                                                                                                                                                                                                                   │ force-systemd-env-242564  │ jenkins │ v1.37.0 │ 09 Oct 25 20:14 UTC │ 09 Oct 25 20:14 UTC │
	│ start   │ -p cert-options-038875 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio                     │ cert-options-038875       │ jenkins │ v1.37.0 │ 09 Oct 25 20:14 UTC │ 09 Oct 25 20:15 UTC │
	│ ssh     │ cert-options-038875 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                                                   │ cert-options-038875       │ jenkins │ v1.37.0 │ 09 Oct 25 20:15 UTC │ 09 Oct 25 20:15 UTC │
	│ ssh     │ -p cert-options-038875 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                 │ cert-options-038875       │ jenkins │ v1.37.0 │ 09 Oct 25 20:15 UTC │ 09 Oct 25 20:15 UTC │
	│ delete  │ -p cert-options-038875                                                                                                                                                                                                                        │ cert-options-038875       │ jenkins │ v1.37.0 │ 09 Oct 25 20:15 UTC │ 09 Oct 25 20:15 UTC │
	│ start   │ -p old-k8s-version-670649 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-670649    │ jenkins │ v1.37.0 │ 09 Oct 25 20:15 UTC │ 09 Oct 25 20:16 UTC │
	│ start   │ -p cert-expiration-282540 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-282540    │ jenkins │ v1.37.0 │ 09 Oct 25 20:15 UTC │ 09 Oct 25 20:16 UTC │
	│ delete  │ -p cert-expiration-282540                                                                                                                                                                                                                     │ cert-expiration-282540    │ jenkins │ v1.37.0 │ 09 Oct 25 20:16 UTC │ 09 Oct 25 20:16 UTC │
	│ start   │ -p no-preload-020313 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-020313         │ jenkins │ v1.37.0 │ 09 Oct 25 20:16 UTC │                     │
	│ addons  │ enable metrics-server -p old-k8s-version-670649 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-670649    │ jenkins │ v1.37.0 │ 09 Oct 25 20:16 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────
────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/09 20:16:10
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1009 20:16:10.390482  478299 out.go:360] Setting OutFile to fd 1 ...
	I1009 20:16:10.390605  478299 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1009 20:16:10.390617  478299 out.go:374] Setting ErrFile to fd 2...
	I1009 20:16:10.390622  478299 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1009 20:16:10.390895  478299 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21683-294150/.minikube/bin
	I1009 20:16:10.391306  478299 out.go:368] Setting JSON to false
	I1009 20:16:10.392214  478299 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":10710,"bootTime":1760030261,"procs":194,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1009 20:16:10.392299  478299 start.go:143] virtualization:  
	I1009 20:16:10.395106  478299 out.go:179] * [no-preload-020313] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1009 20:16:10.398050  478299 out.go:179]   - MINIKUBE_LOCATION=21683
	I1009 20:16:10.398264  478299 notify.go:221] Checking for updates...
	I1009 20:16:10.403357  478299 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1009 20:16:10.406184  478299 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21683-294150/kubeconfig
	I1009 20:16:10.408725  478299 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21683-294150/.minikube
	I1009 20:16:10.411363  478299 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1009 20:16:10.413982  478299 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1009 20:16:10.417569  478299 config.go:182] Loaded profile config "old-k8s-version-670649": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1009 20:16:10.417732  478299 driver.go:422] Setting default libvirt URI to qemu:///system
	I1009 20:16:10.449015  478299 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1009 20:16:10.449265  478299 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1009 20:16:10.523188  478299 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-09 20:16:10.512455249 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214827008 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1009 20:16:10.523298  478299 docker.go:319] overlay module found
	I1009 20:16:10.526398  478299 out.go:179] * Using the docker driver based on user configuration
	I1009 20:16:09.301052  474235 pod_ready.go:94] pod "kube-controller-manager-old-k8s-version-670649" is "Ready"
	I1009 20:16:09.301079  474235 pod_ready.go:86] duration metric: took 261.555018ms for pod "kube-controller-manager-old-k8s-version-670649" in "kube-system" namespace to be "Ready" or be gone ...
	I1009 20:16:09.501221  474235 pod_ready.go:83] waiting for pod "kube-proxy-fffc5" in "kube-system" namespace to be "Ready" or be gone ...
	I1009 20:16:09.900634  474235 pod_ready.go:94] pod "kube-proxy-fffc5" is "Ready"
	I1009 20:16:09.900657  474235 pod_ready.go:86] duration metric: took 399.412084ms for pod "kube-proxy-fffc5" in "kube-system" namespace to be "Ready" or be gone ...
	I1009 20:16:10.102856  474235 pod_ready.go:83] waiting for pod "kube-scheduler-old-k8s-version-670649" in "kube-system" namespace to be "Ready" or be gone ...
	I1009 20:16:10.500875  474235 pod_ready.go:94] pod "kube-scheduler-old-k8s-version-670649" is "Ready"
	I1009 20:16:10.500901  474235 pod_ready.go:86] duration metric: took 398.019887ms for pod "kube-scheduler-old-k8s-version-670649" in "kube-system" namespace to be "Ready" or be gone ...
	I1009 20:16:10.500916  474235 pod_ready.go:40] duration metric: took 2.004729935s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1009 20:16:10.592385  474235 start.go:628] kubectl: 1.33.2, cluster: 1.28.0 (minor skew: 5)
	I1009 20:16:10.595585  474235 out.go:203] 
	W1009 20:16:10.598293  474235 out.go:285] ! /usr/local/bin/kubectl is version 1.33.2, which may have incompatibilities with Kubernetes 1.28.0.
	I1009 20:16:10.601891  474235 out.go:179]   - Want kubectl v1.28.0? Try 'minikube kubectl -- get pods -A'
	I1009 20:16:10.605421  474235 out.go:179] * Done! kubectl is now configured to use "old-k8s-version-670649" cluster and "default" namespace by default
	I1009 20:16:10.529187  478299 start.go:309] selected driver: docker
	I1009 20:16:10.529227  478299 start.go:930] validating driver "docker" against <nil>
	I1009 20:16:10.529241  478299 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1009 20:16:10.530048  478299 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1009 20:16:10.615315  478299 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-09 20:16:10.601438003 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214827008 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1009 20:16:10.615501  478299 start_flags.go:328] no existing cluster config was found, will generate one from the flags 
	I1009 20:16:10.615748  478299 start_flags.go:993] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1009 20:16:10.618822  478299 out.go:179] * Using Docker driver with root privileges
	I1009 20:16:10.621583  478299 cni.go:84] Creating CNI manager for ""
	I1009 20:16:10.621681  478299 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1009 20:16:10.621698  478299 start_flags.go:337] Found "CNI" CNI - setting NetworkPlugin=cni
	I1009 20:16:10.621798  478299 start.go:353] cluster config:
	{Name:no-preload-020313 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-020313 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Containe
rRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID
:0 GPUs: AutoPauseInterval:1m0s}
	I1009 20:16:10.624488  478299 out.go:179] * Starting "no-preload-020313" primary control-plane node in "no-preload-020313" cluster
	I1009 20:16:10.627343  478299 cache.go:123] Beginning downloading kic base image for docker with crio
	I1009 20:16:10.630371  478299 out.go:179] * Pulling base image v0.0.48-1759745255-21703 ...
	I1009 20:16:10.638625  478299 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1009 20:16:10.638777  478299 profile.go:143] Saving config to /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/no-preload-020313/config.json ...
	I1009 20:16:10.638817  478299 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/no-preload-020313/config.json: {Name:mkabfbea4431ef015a7ff4563bca61d4c51150bc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 20:16:10.639023  478299 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon
	I1009 20:16:10.639313  478299 cache.go:107] acquiring lock: {Name:mk067853efdb9d5dfe210e9bdb60a1140d344bf6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1009 20:16:10.639386  478299 cache.go:115] /home/jenkins/minikube-integration/21683-294150/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1009 20:16:10.639395  478299 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/21683-294150/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 92.03µs
	I1009 20:16:10.639409  478299 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/21683-294150/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1009 20:16:10.639421  478299 cache.go:107] acquiring lock: {Name:mk9525a25fb678d6580f1eb602de12141a8b59a8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1009 20:16:10.639498  478299 image.go:138] retrieving image: registry.k8s.io/kube-apiserver:v1.34.1
	I1009 20:16:10.640054  478299 cache.go:107] acquiring lock: {Name:mk65f6488cbc08e9947528f7f60d66925e264a10 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1009 20:16:10.640153  478299 image.go:138] retrieving image: registry.k8s.io/kube-controller-manager:v1.34.1
	I1009 20:16:10.640483  478299 cache.go:107] acquiring lock: {Name:mkef8cd450b6ec8be1600cd17c6da55958b25391 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1009 20:16:10.640493  478299 cache.go:107] acquiring lock: {Name:mkd5d0f835b5a82fe0ea91a553ed69cdedb24993 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1009 20:16:10.640578  478299 image.go:138] retrieving image: registry.k8s.io/kube-scheduler:v1.34.1
	I1009 20:16:10.640623  478299 image.go:138] retrieving image: registry.k8s.io/coredns/coredns:v1.12.1
	I1009 20:16:10.641081  478299 cache.go:107] acquiring lock: {Name:mk549023c9da29243b6f2f23c58ca3df426147a2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1009 20:16:10.641288  478299 image.go:138] retrieving image: registry.k8s.io/kube-proxy:v1.34.1
	I1009 20:16:10.642916  478299 cache.go:107] acquiring lock: {Name:mkac1bf7d8d221e16de37f34c6c9a23b671148bc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1009 20:16:10.643102  478299 image.go:138] retrieving image: registry.k8s.io/pause:3.10.1
	I1009 20:16:10.643281  478299 cache.go:107] acquiring lock: {Name:mkd217de9f557eca101e9a8593531ca54ad0485b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1009 20:16:10.644524  478299 image.go:138] retrieving image: registry.k8s.io/etcd:3.6.4-0
	I1009 20:16:10.644781  478299 image.go:181] daemon lookup for registry.k8s.io/kube-scheduler:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.34.1
	I1009 20:16:10.646155  478299 image.go:181] daemon lookup for registry.k8s.io/kube-proxy:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.34.1
	I1009 20:16:10.646985  478299 image.go:181] daemon lookup for registry.k8s.io/kube-controller-manager:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.34.1
	I1009 20:16:10.647332  478299 image.go:181] daemon lookup for registry.k8s.io/kube-apiserver:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.34.1
	I1009 20:16:10.647547  478299 image.go:181] daemon lookup for registry.k8s.io/pause:3.10.1: Error response from daemon: No such image: registry.k8s.io/pause:3.10.1
	I1009 20:16:10.647696  478299 image.go:181] daemon lookup for registry.k8s.io/coredns/coredns:v1.12.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.12.1
	I1009 20:16:10.648593  478299 image.go:181] daemon lookup for registry.k8s.io/etcd:3.6.4-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.6.4-0
	I1009 20:16:10.694245  478299 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon, skipping pull
	I1009 20:16:10.694270  478299 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 exists in daemon, skipping load
	I1009 20:16:10.694288  478299 cache.go:232] Successfully downloaded all kic artifacts
	I1009 20:16:10.694312  478299 start.go:361] acquireMachinesLock for no-preload-020313: {Name:mkd16c652d3af42b77740f1793cec5d9870abaca Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1009 20:16:10.695227  478299 start.go:365] duration metric: took 891.788µs to acquireMachinesLock for "no-preload-020313"
	I1009 20:16:10.695274  478299 start.go:94] Provisioning new machine with config: &{Name:no-preload-020313 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-020313 Namespace:default APIServerHAVIP: APIServer
Name:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwa
rePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1009 20:16:10.695377  478299 start.go:126] createHost starting for "" (driver="docker")
	I1009 20:16:10.699217  478299 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1009 20:16:10.699524  478299 start.go:160] libmachine.API.Create for "no-preload-020313" (driver="docker")
	I1009 20:16:10.699582  478299 client.go:168] LocalClient.Create starting
	I1009 20:16:10.699681  478299 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21683-294150/.minikube/certs/ca.pem
	I1009 20:16:10.699722  478299 main.go:141] libmachine: Decoding PEM data...
	I1009 20:16:10.699744  478299 main.go:141] libmachine: Parsing certificate...
	I1009 20:16:10.699818  478299 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21683-294150/.minikube/certs/cert.pem
	I1009 20:16:10.699849  478299 main.go:141] libmachine: Decoding PEM data...
	I1009 20:16:10.699863  478299 main.go:141] libmachine: Parsing certificate...
	I1009 20:16:10.700498  478299 cli_runner.go:164] Run: docker network inspect no-preload-020313 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1009 20:16:10.727913  478299 cli_runner.go:211] docker network inspect no-preload-020313 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1009 20:16:10.728007  478299 network_create.go:284] running [docker network inspect no-preload-020313] to gather additional debugging logs...
	I1009 20:16:10.728024  478299 cli_runner.go:164] Run: docker network inspect no-preload-020313
	W1009 20:16:10.772931  478299 cli_runner.go:211] docker network inspect no-preload-020313 returned with exit code 1
	I1009 20:16:10.772961  478299 network_create.go:287] error running [docker network inspect no-preload-020313]: docker network inspect no-preload-020313: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network no-preload-020313 not found
	I1009 20:16:10.772975  478299 network_create.go:289] output of [docker network inspect no-preload-020313]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network no-preload-020313 not found
	
	** /stderr **
	I1009 20:16:10.773078  478299 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1009 20:16:10.810106  478299 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-3847a6577684 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:6e:b5:e6:7d:c7:ad} reservation:<nil>}
	I1009 20:16:10.810763  478299 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-5742e12e0dad IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:9e:82:91:fd:a6:fb} reservation:<nil>}
	I1009 20:16:10.811092  478299 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-11b099636187 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:0e:bb:e5:1b:6d:a2} reservation:<nil>}
	I1009 20:16:10.811718  478299 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-9f71ce8c90e9 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:4a:08:8e:ef:bc:3c} reservation:<nil>}
	I1009 20:16:10.812381  478299 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001dafbf0}
	I1009 20:16:10.812417  478299 network_create.go:124] attempt to create docker network no-preload-020313 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1009 20:16:10.815059  478299 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=no-preload-020313 no-preload-020313
	I1009 20:16:10.883545  478299 network_create.go:108] docker network no-preload-020313 192.168.85.0/24 created
	I1009 20:16:10.883577  478299 kic.go:121] calculated static IP "192.168.85.2" for the "no-preload-020313" container
	I1009 20:16:10.883770  478299 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1009 20:16:10.911013  478299 cli_runner.go:164] Run: docker volume create no-preload-020313 --label name.minikube.sigs.k8s.io=no-preload-020313 --label created_by.minikube.sigs.k8s.io=true
	I1009 20:16:10.939225  478299 oci.go:103] Successfully created a docker volume no-preload-020313
	I1009 20:16:10.939380  478299 cli_runner.go:164] Run: docker run --rm --name no-preload-020313-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=no-preload-020313 --entrypoint /usr/bin/test -v no-preload-020313:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 -d /var/lib
	I1009 20:16:11.006037  478299 cache.go:162] opening:  /home/jenkins/minikube-integration/21683-294150/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1
	I1009 20:16:11.008511  478299 cache.go:162] opening:  /home/jenkins/minikube-integration/21683-294150/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1
	I1009 20:16:11.034844  478299 cache.go:162] opening:  /home/jenkins/minikube-integration/21683-294150/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0
	I1009 20:16:11.045479  478299 cache.go:162] opening:  /home/jenkins/minikube-integration/21683-294150/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1
	I1009 20:16:11.046071  478299 cache.go:162] opening:  /home/jenkins/minikube-integration/21683-294150/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1
	I1009 20:16:11.051896  478299 cache.go:162] opening:  /home/jenkins/minikube-integration/21683-294150/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1
	I1009 20:16:11.053598  478299 cache.go:162] opening:  /home/jenkins/minikube-integration/21683-294150/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1
	I1009 20:16:11.116502  478299 cache.go:157] /home/jenkins/minikube-integration/21683-294150/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 exists
	I1009 20:16:11.116583  478299 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/21683-294150/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1" took 473.676704ms
	I1009 20:16:11.116614  478299 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/21683-294150/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 succeeded
	I1009 20:16:11.659460  478299 cache.go:157] /home/jenkins/minikube-integration/21683-294150/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1 exists
	I1009 20:16:11.659535  478299 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.34.1" -> "/home/jenkins/minikube-integration/21683-294150/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1" took 1.018557363s
	I1009 20:16:11.659563  478299 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.34.1 -> /home/jenkins/minikube-integration/21683-294150/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1 succeeded
	I1009 20:16:11.676997  478299 oci.go:107] Successfully prepared a docker volume no-preload-020313
	I1009 20:16:11.677041  478299 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	W1009 20:16:11.677231  478299 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1009 20:16:11.677398  478299 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1009 20:16:11.746909  478299 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname no-preload-020313 --name no-preload-020313 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=no-preload-020313 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=no-preload-020313 --network no-preload-020313 --ip 192.168.85.2 --volume no-preload-020313:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92
	I1009 20:16:12.143860  478299 cache.go:157] /home/jenkins/minikube-integration/21683-294150/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1 exists
	I1009 20:16:12.143936  478299 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.12.1" -> "/home/jenkins/minikube-integration/21683-294150/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1" took 1.50344895s
	I1009 20:16:12.143966  478299 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.12.1 -> /home/jenkins/minikube-integration/21683-294150/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1 succeeded
	I1009 20:16:12.147400  478299 cache.go:157] /home/jenkins/minikube-integration/21683-294150/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1 exists
	I1009 20:16:12.147516  478299 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.34.1" -> "/home/jenkins/minikube-integration/21683-294150/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1" took 1.507035392s
	I1009 20:16:12.147528  478299 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.34.1 -> /home/jenkins/minikube-integration/21683-294150/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1 succeeded
	I1009 20:16:12.177236  478299 cache.go:157] /home/jenkins/minikube-integration/21683-294150/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1 exists
	I1009 20:16:12.177332  478299 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.34.1" -> "/home/jenkins/minikube-integration/21683-294150/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1" took 1.537271703s
	I1009 20:16:12.177360  478299 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.34.1 -> /home/jenkins/minikube-integration/21683-294150/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1 succeeded
	I1009 20:16:12.185572  478299 cache.go:157] /home/jenkins/minikube-integration/21683-294150/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1 exists
	I1009 20:16:12.185646  478299 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.34.1" -> "/home/jenkins/minikube-integration/21683-294150/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1" took 1.546223682s
	I1009 20:16:12.185675  478299 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.34.1 -> /home/jenkins/minikube-integration/21683-294150/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1 succeeded
	I1009 20:16:12.222472  478299 cli_runner.go:164] Run: docker container inspect no-preload-020313 --format={{.State.Running}}
	I1009 20:16:12.259051  478299 cli_runner.go:164] Run: docker container inspect no-preload-020313 --format={{.State.Status}}
	I1009 20:16:12.291449  478299 cli_runner.go:164] Run: docker exec no-preload-020313 stat /var/lib/dpkg/alternatives/iptables
	I1009 20:16:12.362883  478299 oci.go:144] the created container "no-preload-020313" has a running status.
	I1009 20:16:12.362915  478299 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21683-294150/.minikube/machines/no-preload-020313/id_rsa...
	I1009 20:16:13.166416  478299 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21683-294150/.minikube/machines/no-preload-020313/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1009 20:16:13.188977  478299 cli_runner.go:164] Run: docker container inspect no-preload-020313 --format={{.State.Status}}
	I1009 20:16:13.209555  478299 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1009 20:16:13.209577  478299 kic_runner.go:114] Args: [docker exec --privileged no-preload-020313 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1009 20:16:13.283420  478299 cli_runner.go:164] Run: docker container inspect no-preload-020313 --format={{.State.Status}}
	I1009 20:16:13.303395  478299 machine.go:93] provisionDockerMachine start ...
	I1009 20:16:13.303492  478299 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-020313
	I1009 20:16:13.324014  478299 main.go:141] libmachine: Using SSH client type: native
	I1009 20:16:13.324411  478299 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33421 <nil> <nil>}
	I1009 20:16:13.324431  478299 main.go:141] libmachine: About to run SSH command:
	hostname
	I1009 20:16:13.325067  478299 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:43530->127.0.0.1:33421: read: connection reset by peer
	I1009 20:16:13.518632  478299 cache.go:157] /home/jenkins/minikube-integration/21683-294150/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0 exists
	I1009 20:16:13.518663  478299 cache.go:96] cache image "registry.k8s.io/etcd:3.6.4-0" -> "/home/jenkins/minikube-integration/21683-294150/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0" took 2.8753847s
	I1009 20:16:13.518675  478299 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.4-0 -> /home/jenkins/minikube-integration/21683-294150/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0 succeeded
	I1009 20:16:13.518699  478299 cache.go:87] Successfully saved all images to host disk.
	I1009 20:16:16.480504  478299 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-020313
	
	I1009 20:16:16.480590  478299 ubuntu.go:182] provisioning hostname "no-preload-020313"
	I1009 20:16:16.480675  478299 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-020313
	I1009 20:16:16.498836  478299 main.go:141] libmachine: Using SSH client type: native
	I1009 20:16:16.499148  478299 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33421 <nil> <nil>}
	I1009 20:16:16.499171  478299 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-020313 && echo "no-preload-020313" | sudo tee /etc/hostname
	I1009 20:16:16.659833  478299 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-020313
	
	I1009 20:16:16.659912  478299 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-020313
	I1009 20:16:16.680378  478299 main.go:141] libmachine: Using SSH client type: native
	I1009 20:16:16.680711  478299 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33421 <nil> <nil>}
	I1009 20:16:16.680822  478299 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-020313' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-020313/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-020313' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1009 20:16:16.825528  478299 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1009 20:16:16.825554  478299 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21683-294150/.minikube CaCertPath:/home/jenkins/minikube-integration/21683-294150/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21683-294150/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21683-294150/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21683-294150/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21683-294150/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21683-294150/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21683-294150/.minikube}
	I1009 20:16:16.825587  478299 ubuntu.go:190] setting up certificates
	I1009 20:16:16.825597  478299 provision.go:84] configureAuth start
	I1009 20:16:16.825659  478299 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-020313
	I1009 20:16:16.843329  478299 provision.go:143] copyHostCerts
	I1009 20:16:16.843397  478299 exec_runner.go:144] found /home/jenkins/minikube-integration/21683-294150/.minikube/ca.pem, removing ...
	I1009 20:16:16.843406  478299 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21683-294150/.minikube/ca.pem
	I1009 20:16:16.843485  478299 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-294150/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21683-294150/.minikube/ca.pem (1078 bytes)
	I1009 20:16:16.843571  478299 exec_runner.go:144] found /home/jenkins/minikube-integration/21683-294150/.minikube/cert.pem, removing ...
	I1009 20:16:16.843576  478299 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21683-294150/.minikube/cert.pem
	I1009 20:16:16.843601  478299 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-294150/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21683-294150/.minikube/cert.pem (1123 bytes)
	I1009 20:16:16.843651  478299 exec_runner.go:144] found /home/jenkins/minikube-integration/21683-294150/.minikube/key.pem, removing ...
	I1009 20:16:16.843655  478299 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21683-294150/.minikube/key.pem
	I1009 20:16:16.843677  478299 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-294150/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21683-294150/.minikube/key.pem (1679 bytes)
	I1009 20:16:16.843721  478299 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21683-294150/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21683-294150/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21683-294150/.minikube/certs/ca-key.pem org=jenkins.no-preload-020313 san=[127.0.0.1 192.168.85.2 localhost minikube no-preload-020313]
	I1009 20:16:17.960195  478299 provision.go:177] copyRemoteCerts
	I1009 20:16:17.960269  478299 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1009 20:16:17.960321  478299 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-020313
	I1009 20:16:17.979018  478299 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33421 SSHKeyPath:/home/jenkins/minikube-integration/21683-294150/.minikube/machines/no-preload-020313/id_rsa Username:docker}
	I1009 20:16:18.086967  478299 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-294150/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1009 20:16:18.107251  478299 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-294150/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1009 20:16:18.127041  478299 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-294150/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1009 20:16:18.151879  478299 provision.go:87] duration metric: took 1.326258599s to configureAuth
	I1009 20:16:18.151908  478299 ubuntu.go:206] setting minikube options for container-runtime
	I1009 20:16:18.152101  478299 config.go:182] Loaded profile config "no-preload-020313": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1009 20:16:18.152210  478299 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-020313
	I1009 20:16:18.170019  478299 main.go:141] libmachine: Using SSH client type: native
	I1009 20:16:18.170363  478299 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33421 <nil> <nil>}
	I1009 20:16:18.170380  478299 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1009 20:16:18.435777  478299 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1009 20:16:18.435799  478299 machine.go:96] duration metric: took 5.132381194s to provisionDockerMachine
	I1009 20:16:18.435818  478299 client.go:171] duration metric: took 7.736224969s to LocalClient.Create
	I1009 20:16:18.435835  478299 start.go:168] duration metric: took 7.736312815s to libmachine.API.Create "no-preload-020313"
	I1009 20:16:18.435843  478299 start.go:294] postStartSetup for "no-preload-020313" (driver="docker")
	I1009 20:16:18.435854  478299 start.go:323] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1009 20:16:18.435943  478299 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1009 20:16:18.436005  478299 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-020313
	I1009 20:16:18.453249  478299 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33421 SSHKeyPath:/home/jenkins/minikube-integration/21683-294150/.minikube/machines/no-preload-020313/id_rsa Username:docker}
	I1009 20:16:18.558270  478299 ssh_runner.go:195] Run: cat /etc/os-release
	I1009 20:16:18.561912  478299 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1009 20:16:18.561948  478299 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1009 20:16:18.561961  478299 filesync.go:126] Scanning /home/jenkins/minikube-integration/21683-294150/.minikube/addons for local assets ...
	I1009 20:16:18.562016  478299 filesync.go:126] Scanning /home/jenkins/minikube-integration/21683-294150/.minikube/files for local assets ...
	I1009 20:16:18.562107  478299 filesync.go:149] local asset: /home/jenkins/minikube-integration/21683-294150/.minikube/files/etc/ssl/certs/2960022.pem -> 2960022.pem in /etc/ssl/certs
	I1009 20:16:18.562227  478299 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1009 20:16:18.570124  478299 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-294150/.minikube/files/etc/ssl/certs/2960022.pem --> /etc/ssl/certs/2960022.pem (1708 bytes)
	I1009 20:16:18.589324  478299 start.go:297] duration metric: took 153.464171ms for postStartSetup
	I1009 20:16:18.589728  478299 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-020313
	I1009 20:16:18.607622  478299 profile.go:143] Saving config to /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/no-preload-020313/config.json ...
	I1009 20:16:18.607918  478299 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1009 20:16:18.607978  478299 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-020313
	I1009 20:16:18.625382  478299 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33421 SSHKeyPath:/home/jenkins/minikube-integration/21683-294150/.minikube/machines/no-preload-020313/id_rsa Username:docker}
	I1009 20:16:18.726907  478299 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1009 20:16:18.731843  478299 start.go:129] duration metric: took 8.036449344s to createHost
	I1009 20:16:18.731869  478299 start.go:84] releasing machines lock for "no-preload-020313", held for 8.036619111s
	I1009 20:16:18.731943  478299 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-020313
	I1009 20:16:18.750169  478299 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1009 20:16:18.750250  478299 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-020313
	I1009 20:16:18.750426  478299 ssh_runner.go:195] Run: cat /version.json
	I1009 20:16:18.750465  478299 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-020313
	I1009 20:16:18.771523  478299 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33421 SSHKeyPath:/home/jenkins/minikube-integration/21683-294150/.minikube/machines/no-preload-020313/id_rsa Username:docker}
	I1009 20:16:18.781236  478299 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33421 SSHKeyPath:/home/jenkins/minikube-integration/21683-294150/.minikube/machines/no-preload-020313/id_rsa Username:docker}
	I1009 20:16:18.880752  478299 ssh_runner.go:195] Run: systemctl --version
	I1009 20:16:18.995326  478299 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1009 20:16:19.037499  478299 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1009 20:16:19.042184  478299 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1009 20:16:19.042255  478299 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1009 20:16:19.071981  478299 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1009 20:16:19.072007  478299 start.go:496] detecting cgroup driver to use...
	I1009 20:16:19.072042  478299 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1009 20:16:19.072092  478299 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1009 20:16:19.093188  478299 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1009 20:16:19.108052  478299 docker.go:218] disabling cri-docker service (if available) ...
	I1009 20:16:19.108123  478299 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1009 20:16:19.126947  478299 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1009 20:16:19.151039  478299 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1009 20:16:19.293271  478299 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1009 20:16:19.427216  478299 docker.go:234] disabling docker service ...
	I1009 20:16:19.427283  478299 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1009 20:16:19.452666  478299 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1009 20:16:19.467280  478299 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1009 20:16:19.583797  478299 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1009 20:16:19.705192  478299 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1009 20:16:19.719410  478299 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1009 20:16:19.735831  478299 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1009 20:16:19.735944  478299 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 20:16:19.745742  478299 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1009 20:16:19.745838  478299 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 20:16:19.755615  478299 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 20:16:19.767479  478299 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 20:16:19.778894  478299 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1009 20:16:19.788423  478299 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 20:16:19.798356  478299 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 20:16:19.814653  478299 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 20:16:19.823928  478299 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1009 20:16:19.832071  478299 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1009 20:16:19.840170  478299 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 20:16:19.955577  478299 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1009 20:16:20.135845  478299 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1009 20:16:20.135916  478299 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1009 20:16:20.148344  478299 start.go:564] Will wait 60s for crictl version
	I1009 20:16:20.148425  478299 ssh_runner.go:195] Run: which crictl
	I1009 20:16:20.152739  478299 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1009 20:16:20.215964  478299 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1009 20:16:20.216058  478299 ssh_runner.go:195] Run: crio --version
	I1009 20:16:20.266618  478299 ssh_runner.go:195] Run: crio --version
	I1009 20:16:20.321258  478299 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1009 20:16:20.324092  478299 cli_runner.go:164] Run: docker network inspect no-preload-020313 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1009 20:16:20.343456  478299 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1009 20:16:20.348013  478299 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1009 20:16:20.359995  478299 kubeadm.go:883] updating cluster {Name:no-preload-020313 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-020313 Namespace:default APIServerHAVIP: APIServerName:minikubeCA API
ServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath:
SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1009 20:16:20.360106  478299 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1009 20:16:20.360149  478299 ssh_runner.go:195] Run: sudo crictl images --output json
	I1009 20:16:20.388535  478299 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.34.1". assuming images are not preloaded.
	I1009 20:16:20.388572  478299 cache_images.go:89] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.34.1 registry.k8s.io/kube-controller-manager:v1.34.1 registry.k8s.io/kube-scheduler:v1.34.1 registry.k8s.io/kube-proxy:v1.34.1 registry.k8s.io/pause:3.10.1 registry.k8s.io/etcd:3.6.4-0 registry.k8s.io/coredns/coredns:v1.12.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1009 20:16:20.388619  478299 image.go:138] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1009 20:16:20.388849  478299 image.go:138] retrieving image: registry.k8s.io/kube-apiserver:v1.34.1
	I1009 20:16:20.388958  478299 image.go:138] retrieving image: registry.k8s.io/kube-controller-manager:v1.34.1
	I1009 20:16:20.389057  478299 image.go:138] retrieving image: registry.k8s.io/kube-scheduler:v1.34.1
	I1009 20:16:20.389261  478299 image.go:138] retrieving image: registry.k8s.io/kube-proxy:v1.34.1
	I1009 20:16:20.389297  478299 image.go:138] retrieving image: registry.k8s.io/coredns/coredns:v1.12.1
	I1009 20:16:20.389409  478299 image.go:138] retrieving image: registry.k8s.io/pause:3.10.1
	I1009 20:16:20.389460  478299 image.go:138] retrieving image: registry.k8s.io/etcd:3.6.4-0
	
	
	==> CRI-O <==
	Oct 09 20:16:07 old-k8s-version-670649 crio[838]: time="2025-10-09T20:16:07.915921988Z" level=info msg="Created container 3f7eac65ee9bec00efdf2229bfc22db4d398dc894d337fcdec2da9590cb80825: kube-system/coredns-5dd5756b68-kz799/coredns" id=0f1bf24d-a82e-4fb3-a6f1-c162277be51c name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 20:16:07 old-k8s-version-670649 crio[838]: time="2025-10-09T20:16:07.917694913Z" level=info msg="Starting container: 3f7eac65ee9bec00efdf2229bfc22db4d398dc894d337fcdec2da9590cb80825" id=8f703950-5983-419e-bb4d-e93699174f73 name=/runtime.v1.RuntimeService/StartContainer
	Oct 09 20:16:07 old-k8s-version-670649 crio[838]: time="2025-10-09T20:16:07.925386549Z" level=info msg="Started container" PID=1913 containerID=3f7eac65ee9bec00efdf2229bfc22db4d398dc894d337fcdec2da9590cb80825 description=kube-system/coredns-5dd5756b68-kz799/coredns id=8f703950-5983-419e-bb4d-e93699174f73 name=/runtime.v1.RuntimeService/StartContainer sandboxID=6b3a793a6503d72991b26a16ca9982d29bbe19886c021cf8b045d345e46f01bb
	Oct 09 20:16:11 old-k8s-version-670649 crio[838]: time="2025-10-09T20:16:11.223232011Z" level=info msg="Running pod sandbox: default/busybox/POD" id=88990be0-c3c4-4d9a-b5d7-12d1b42b5c64 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 09 20:16:11 old-k8s-version-670649 crio[838]: time="2025-10-09T20:16:11.223315901Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 09 20:16:11 old-k8s-version-670649 crio[838]: time="2025-10-09T20:16:11.23580971Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:f5eeaf57c6f0f274a1c2706910f42eea6998e7ce8d24956bf4778072cb0a0e21 UID:0b2ec2bb-1c4e-4a74-9583-a369e03ce9b9 NetNS:/var/run/netns/357838ea-cab3-4f5b-9c29-94d93339c941 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x40016162f0}] Aliases:map[]}"
	Oct 09 20:16:11 old-k8s-version-670649 crio[838]: time="2025-10-09T20:16:11.235859278Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Oct 09 20:16:11 old-k8s-version-670649 crio[838]: time="2025-10-09T20:16:11.267372219Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:f5eeaf57c6f0f274a1c2706910f42eea6998e7ce8d24956bf4778072cb0a0e21 UID:0b2ec2bb-1c4e-4a74-9583-a369e03ce9b9 NetNS:/var/run/netns/357838ea-cab3-4f5b-9c29-94d93339c941 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x40016162f0}] Aliases:map[]}"
	Oct 09 20:16:11 old-k8s-version-670649 crio[838]: time="2025-10-09T20:16:11.267580214Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Oct 09 20:16:11 old-k8s-version-670649 crio[838]: time="2025-10-09T20:16:11.281884421Z" level=info msg="Ran pod sandbox f5eeaf57c6f0f274a1c2706910f42eea6998e7ce8d24956bf4778072cb0a0e21 with infra container: default/busybox/POD" id=88990be0-c3c4-4d9a-b5d7-12d1b42b5c64 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 09 20:16:11 old-k8s-version-670649 crio[838]: time="2025-10-09T20:16:11.303293399Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=9bca4ae4-216b-4052-a1a4-25ab0b81acbc name=/runtime.v1.ImageService/ImageStatus
	Oct 09 20:16:11 old-k8s-version-670649 crio[838]: time="2025-10-09T20:16:11.306623944Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=9bca4ae4-216b-4052-a1a4-25ab0b81acbc name=/runtime.v1.ImageService/ImageStatus
	Oct 09 20:16:11 old-k8s-version-670649 crio[838]: time="2025-10-09T20:16:11.307223247Z" level=info msg="Neither image nor artifact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=9bca4ae4-216b-4052-a1a4-25ab0b81acbc name=/runtime.v1.ImageService/ImageStatus
	Oct 09 20:16:11 old-k8s-version-670649 crio[838]: time="2025-10-09T20:16:11.320313035Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=553bcad7-a965-45c7-92fd-4b508599adf4 name=/runtime.v1.ImageService/PullImage
	Oct 09 20:16:11 old-k8s-version-670649 crio[838]: time="2025-10-09T20:16:11.329011661Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Oct 09 20:16:13 old-k8s-version-670649 crio[838]: time="2025-10-09T20:16:13.388712769Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e" id=553bcad7-a965-45c7-92fd-4b508599adf4 name=/runtime.v1.ImageService/PullImage
	Oct 09 20:16:13 old-k8s-version-670649 crio[838]: time="2025-10-09T20:16:13.391994903Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=9c7b98f1-9355-4385-bb04-c954db9adb29 name=/runtime.v1.ImageService/ImageStatus
	Oct 09 20:16:13 old-k8s-version-670649 crio[838]: time="2025-10-09T20:16:13.394216893Z" level=info msg="Creating container: default/busybox/busybox" id=3190009c-6373-4fc5-9f81-0786ce76e747 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 20:16:13 old-k8s-version-670649 crio[838]: time="2025-10-09T20:16:13.395032955Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 09 20:16:13 old-k8s-version-670649 crio[838]: time="2025-10-09T20:16:13.399809437Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 09 20:16:13 old-k8s-version-670649 crio[838]: time="2025-10-09T20:16:13.400302498Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 09 20:16:13 old-k8s-version-670649 crio[838]: time="2025-10-09T20:16:13.415414167Z" level=info msg="Created container 621eefa0d66cdab8bb0f7ea5998b9350e98b46c1a7c1dc596347817be85fb032: default/busybox/busybox" id=3190009c-6373-4fc5-9f81-0786ce76e747 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 20:16:13 old-k8s-version-670649 crio[838]: time="2025-10-09T20:16:13.416455799Z" level=info msg="Starting container: 621eefa0d66cdab8bb0f7ea5998b9350e98b46c1a7c1dc596347817be85fb032" id=30b9727c-b22a-4006-8de4-26c69229ef9b name=/runtime.v1.RuntimeService/StartContainer
	Oct 09 20:16:13 old-k8s-version-670649 crio[838]: time="2025-10-09T20:16:13.418433289Z" level=info msg="Started container" PID=1974 containerID=621eefa0d66cdab8bb0f7ea5998b9350e98b46c1a7c1dc596347817be85fb032 description=default/busybox/busybox id=30b9727c-b22a-4006-8de4-26c69229ef9b name=/runtime.v1.RuntimeService/StartContainer sandboxID=f5eeaf57c6f0f274a1c2706910f42eea6998e7ce8d24956bf4778072cb0a0e21
	Oct 09 20:16:20 old-k8s-version-670649 crio[838]: time="2025-10-09T20:16:20.117060594Z" level=error msg="Unhandled Error: unable to upgrade websocket connection: websocket server finished before becoming ready (logger=\"UnhandledError\")"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                              NAMESPACE
	621eefa0d66cd       gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e   8 seconds ago       Running             busybox                   0                   f5eeaf57c6f0f       busybox                                          default
	3f7eac65ee9be       97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108                                      14 seconds ago      Running             coredns                   0                   6b3a793a6503d       coredns-5dd5756b68-kz799                         kube-system
	ef807f21eaed3       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                      14 seconds ago      Running             storage-provisioner       0                   d42cb874bf3ba       storage-provisioner                              kube-system
	0e3f5709e422c       docker.io/kindest/kindnetd@sha256:2bdc3188f2ddc8e54841f69ef900a8dde1280057c97500f966a7ef31364021f1    25 seconds ago      Running             kindnet-cni               0                   1527c5e141b3c       kindnet-4nzl2                                    kube-system
	433767dec021d       940f54a5bcae9dd4c97844fa36d12cc5d9078cffd5e677ad0df1528c12f3240d                                      28 seconds ago      Running             kube-proxy                0                   b2ebc9804b424       kube-proxy-fffc5                                 kube-system
	17559c188db68       9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace                                      48 seconds ago      Running             etcd                      0                   7acef95421611       etcd-old-k8s-version-670649                      kube-system
	f7695a41f2f13       762dce4090c5f4789bb5dbb933d5b50bc1a2357d7739bbce30d949820e5a38ee                                      48 seconds ago      Running             kube-scheduler            0                   0d57655f2926d       kube-scheduler-old-k8s-version-670649            kube-system
	951be45b8bae7       00543d2fe5d71095984891a0609ee504b81f9d72a69a0ad02039d4e135213766                                      48 seconds ago      Running             kube-apiserver            0                   8ba8f497bc203       kube-apiserver-old-k8s-version-670649            kube-system
	46e3d7c78ed88       46cc66ccc7c19b4b30625b0aa4e178792add2385659205d7c6fcbd05d78c23e5                                      48 seconds ago      Running             kube-controller-manager   0                   f5209ddcb05bc       kube-controller-manager-old-k8s-version-670649   kube-system
	
	
	==> coredns [3f7eac65ee9bec00efdf2229bfc22db4d398dc894d337fcdec2da9590cb80825] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = b7aacdf6a6aa730aafe4d018cac9b7b5ecfb346cba84a99f64521f87aef8b4958639c1cf97967716465791d05bd38f372615327b7cb1d93c850bae532744d54d
	CoreDNS-1.10.1
	linux/arm64, go1.20, 055b2c3
	[INFO] 127.0.0.1:38068 - 38612 "HINFO IN 5014003633257837025.71477023206875712. udp 55 false 512" NXDOMAIN qr,rd,ra 55 0.004959632s
	
	
	==> describe nodes <==
	Name:               old-k8s-version-670649
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=old-k8s-version-670649
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=67931d2cc0a2e29153be17ee4f2d502b8a45c9cb
	                    minikube.k8s.io/name=old-k8s-version-670649
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_09T20_15_41_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 09 Oct 2025 20:15:37 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-670649
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 09 Oct 2025 20:16:21 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 09 Oct 2025 20:16:11 +0000   Thu, 09 Oct 2025 20:15:34 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 09 Oct 2025 20:16:11 +0000   Thu, 09 Oct 2025 20:15:34 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 09 Oct 2025 20:16:11 +0000   Thu, 09 Oct 2025 20:15:34 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 09 Oct 2025 20:16:11 +0000   Thu, 09 Oct 2025 20:16:07 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    old-k8s-version-670649
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022292Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022292Ki
	  pods:               110
	System Info:
	  Machine ID:                 1ea56a0306394c0ab6a84c1c22a683db
	  System UUID:                d2088d50-dda3-441d-a1ce-e5d6a3366421
	  Boot ID:                    7eb9c7d9-8be8-415a-b196-052b632584a4
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.28.0
	  Kube-Proxy Version:         v1.28.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         12s
	  kube-system                 coredns-5dd5756b68-kz799                          100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     29s
	  kube-system                 etcd-old-k8s-version-670649                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         41s
	  kube-system                 kindnet-4nzl2                                     100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      29s
	  kube-system                 kube-apiserver-old-k8s-version-670649             250m (12%)    0 (0%)      0 (0%)           0 (0%)         41s
	  kube-system                 kube-controller-manager-old-k8s-version-670649    200m (10%)    0 (0%)      0 (0%)           0 (0%)         44s
	  kube-system                 kube-proxy-fffc5                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         29s
	  kube-system                 kube-scheduler-old-k8s-version-670649             100m (5%)     0 (0%)      0 (0%)           0 (0%)         41s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         28s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 28s                kube-proxy       
	  Normal  Starting                 49s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  49s (x8 over 49s)  kubelet          Node old-k8s-version-670649 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    49s (x8 over 49s)  kubelet          Node old-k8s-version-670649 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     49s (x8 over 49s)  kubelet          Node old-k8s-version-670649 status is now: NodeHasSufficientPID
	  Normal  Starting                 42s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  42s                kubelet          Node old-k8s-version-670649 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    42s                kubelet          Node old-k8s-version-670649 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     42s                kubelet          Node old-k8s-version-670649 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           30s                node-controller  Node old-k8s-version-670649 event: Registered Node old-k8s-version-670649 in Controller
	  Normal  NodeReady                15s                kubelet          Node old-k8s-version-670649 status is now: NodeReady
	
	
	==> dmesg <==
	[Oct 9 19:40] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:45] overlayfs: idmapped layers are currently not supported
	[ +36.012100] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:47] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:48] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:49] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:50] overlayfs: idmapped layers are currently not supported
	[ +27.967875] overlayfs: idmapped layers are currently not supported
	[  +2.167003] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:51] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:52] overlayfs: idmapped layers are currently not supported
	[ +41.056229] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:53] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:54] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:55] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:57] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:59] overlayfs: idmapped layers are currently not supported
	[ +30.257956] overlayfs: idmapped layers are currently not supported
	[Oct 9 20:02] overlayfs: idmapped layers are currently not supported
	[Oct 9 20:04] overlayfs: idmapped layers are currently not supported
	[Oct 9 20:06] overlayfs: idmapped layers are currently not supported
	[Oct 9 20:12] overlayfs: idmapped layers are currently not supported
	[Oct 9 20:14] overlayfs: idmapped layers are currently not supported
	[Oct 9 20:15] overlayfs: idmapped layers are currently not supported
	[Oct 9 20:16] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [17559c188db68c7357747d52af886cecee8335bbd615d71709c2b611b440bfdc] <==
	{"level":"info","ts":"2025-10-09T20:15:34.101411Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-10-09T20:15:34.10206Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-10-09T20:15:34.102107Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-10-09T20:15:34.101723Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 switched to configuration voters=(16896983918768216326)"}
	{"level":"info","ts":"2025-10-09T20:15:34.102417Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","added-peer-id":"ea7e25599daad906","added-peer-peer-urls":["https://192.168.76.2:2380"]}
	{"level":"info","ts":"2025-10-09T20:15:34.107011Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"ea7e25599daad906","initial-advertise-peer-urls":["https://192.168.76.2:2380"],"listen-peer-urls":["https://192.168.76.2:2380"],"advertise-client-urls":["https://192.168.76.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.76.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-10-09T20:15:34.107141Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-10-09T20:15:35.057179Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 is starting a new election at term 1"}
	{"level":"info","ts":"2025-10-09T20:15:35.057337Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became pre-candidate at term 1"}
	{"level":"info","ts":"2025-10-09T20:15:35.057381Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgPreVoteResp from ea7e25599daad906 at term 1"}
	{"level":"info","ts":"2025-10-09T20:15:35.057442Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became candidate at term 2"}
	{"level":"info","ts":"2025-10-09T20:15:35.057475Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgVoteResp from ea7e25599daad906 at term 2"}
	{"level":"info","ts":"2025-10-09T20:15:35.05753Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became leader at term 2"}
	{"level":"info","ts":"2025-10-09T20:15:35.057564Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: ea7e25599daad906 elected leader ea7e25599daad906 at term 2"}
	{"level":"info","ts":"2025-10-09T20:15:35.061299Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2025-10-09T20:15:35.065349Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"ea7e25599daad906","local-member-attributes":"{Name:old-k8s-version-670649 ClientURLs:[https://192.168.76.2:2379]}","request-path":"/0/members/ea7e25599daad906/attributes","cluster-id":"6f20f2c4b2fb5f8a","publish-timeout":"7s"}
	{"level":"info","ts":"2025-10-09T20:15:35.065433Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-10-09T20:15:35.06654Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-10-09T20:15:35.066842Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-10-09T20:15:35.072264Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.76.2:2379"}
	{"level":"info","ts":"2025-10-09T20:15:35.072909Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","cluster-version":"3.5"}
	{"level":"info","ts":"2025-10-09T20:15:35.073194Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-10-09T20:15:35.073281Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2025-10-09T20:15:35.081523Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-10-09T20:15:35.081657Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	
	==> kernel <==
	 20:16:22 up  2:58,  0 user,  load average: 1.70, 1.21, 1.45
	Linux old-k8s-version-670649 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [0e3f5709e422cbb6b6e7e8fa647766e4d7ed1e6aa5691638403348f22235a832] <==
	I1009 20:15:56.507447       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1009 20:15:56.508544       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1009 20:15:56.508756       1 main.go:148] setting mtu 1500 for CNI 
	I1009 20:15:56.508779       1 main.go:178] kindnetd IP family: "ipv4"
	I1009 20:15:56.508798       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-09T20:15:56Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1009 20:15:56.711432       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1009 20:15:56.801168       1 controller.go:381] "Waiting for informer caches to sync"
	I1009 20:15:56.801273       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1009 20:15:56.802265       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1009 20:15:57.101774       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1009 20:15:57.101935       1 metrics.go:72] Registering metrics
	I1009 20:15:57.102016       1 controller.go:711] "Syncing nftables rules"
	I1009 20:16:06.714519       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1009 20:16:06.714644       1 main.go:301] handling current node
	I1009 20:16:16.716552       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1009 20:16:16.716642       1 main.go:301] handling current node
	
	
	==> kube-apiserver [951be45b8bae72438088b0092d9f5488ec1402922c34d94770bcb709d9d7fb0c] <==
	I1009 20:15:37.404490       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I1009 20:15:37.404796       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I1009 20:15:37.405263       1 shared_informer.go:318] Caches are synced for configmaps
	I1009 20:15:37.406486       1 controller.go:624] quota admission added evaluator for: namespaces
	I1009 20:15:37.406796       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I1009 20:15:37.406910       1 aggregator.go:166] initial CRD sync complete...
	I1009 20:15:37.406944       1 autoregister_controller.go:141] Starting autoregister controller
	I1009 20:15:37.406977       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1009 20:15:37.407004       1 cache.go:39] Caches are synced for autoregister controller
	I1009 20:15:37.607027       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I1009 20:15:38.210987       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1009 20:15:38.215799       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1009 20:15:38.215826       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1009 20:15:38.952987       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1009 20:15:39.021468       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1009 20:15:39.139971       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1009 20:15:39.146587       1 lease.go:263] Resetting endpoints for master service "kubernetes" to [192.168.76.2]
	I1009 20:15:39.147733       1 controller.go:624] quota admission added evaluator for: endpoints
	I1009 20:15:39.152455       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1009 20:15:39.351722       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1009 20:15:40.591584       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1009 20:15:40.607512       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1009 20:15:40.629897       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I1009 20:15:53.067469       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	I1009 20:15:53.145836       1 controller.go:624] quota admission added evaluator for: controllerrevisions.apps
	
	
	==> kube-controller-manager [46e3d7c78ed8857f30f9f187f8d22d32ba9ac598d4012d7edae4a370af2ab05a] <==
	I1009 20:15:52.769922       1 shared_informer.go:318] Caches are synced for garbage collector
	I1009 20:15:52.800499       1 shared_informer.go:318] Caches are synced for garbage collector
	I1009 20:15:52.800547       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I1009 20:15:53.073876       1 event.go:307] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-5dd5756b68 to 2"
	I1009 20:15:53.218091       1 event.go:307] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-4nzl2"
	I1009 20:15:53.218117       1 event.go:307] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-fffc5"
	I1009 20:15:53.338387       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-5dd5756b68-d7f6d"
	I1009 20:15:53.400109       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-5dd5756b68-kz799"
	I1009 20:15:53.444128       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="370.455054ms"
	I1009 20:15:53.468956       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="24.755404ms"
	I1009 20:15:53.523682       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="54.673791ms"
	I1009 20:15:53.523804       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="79.952µs"
	I1009 20:15:54.740163       1 event.go:307] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-5dd5756b68 to 1 from 2"
	I1009 20:15:54.791963       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-5dd5756b68-d7f6d"
	I1009 20:15:54.818041       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="78.888141ms"
	I1009 20:15:54.845309       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="27.208053ms"
	I1009 20:15:54.845457       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="108.293µs"
	I1009 20:16:07.151444       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="110.074µs"
	I1009 20:16:07.225511       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="61.539µs"
	I1009 20:16:07.233334       1 event.go:307] "Event occurred" object="kube-system/storage-provisioner" fieldPath="" kind="Pod" apiVersion="" type="Normal" reason="TaintManagerEviction" message="Cancelling deletion of Pod kube-system/storage-provisioner"
	I1009 20:16:07.233440       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68-kz799" fieldPath="" kind="Pod" apiVersion="" type="Normal" reason="TaintManagerEviction" message="Cancelling deletion of Pod kube-system/coredns-5dd5756b68-kz799"
	I1009 20:16:07.237881       1 node_lifecycle_controller.go:1048] "Controller detected that some Nodes are Ready. Exiting master disruption mode"
	I1009 20:16:08.034118       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="141.705µs"
	I1009 20:16:08.997536       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="13.968006ms"
	I1009 20:16:08.997750       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="82.02µs"
	
	
	==> kube-proxy [433767dec021d4750046cb2d4db3f856e835f4fa49750c9d189469a21961bc18] <==
	I1009 20:15:53.758716       1 server_others.go:69] "Using iptables proxy"
	I1009 20:15:53.809377       1 node.go:141] Successfully retrieved node IP: 192.168.76.2
	I1009 20:15:53.844168       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1009 20:15:53.847400       1 server_others.go:152] "Using iptables Proxier"
	I1009 20:15:53.847440       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1009 20:15:53.847448       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1009 20:15:53.847472       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1009 20:15:53.847687       1 server.go:846] "Version info" version="v1.28.0"
	I1009 20:15:53.847698       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1009 20:15:53.849061       1 config.go:188] "Starting service config controller"
	I1009 20:15:53.849074       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1009 20:15:53.849092       1 config.go:97] "Starting endpoint slice config controller"
	I1009 20:15:53.849095       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1009 20:15:53.855072       1 config.go:315] "Starting node config controller"
	I1009 20:15:53.855088       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1009 20:15:53.950431       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1009 20:15:53.950483       1 shared_informer.go:318] Caches are synced for service config
	I1009 20:15:53.955382       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [f7695a41f2f13689e4db6c1fd9f18ff07ea8b3923fe56bf46938ee00e429559e] <==
	W1009 20:15:37.379204       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1009 20:15:37.379266       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W1009 20:15:37.380527       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1009 20:15:37.380562       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W1009 20:15:38.221932       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1009 20:15:38.222098       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W1009 20:15:38.309264       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1009 20:15:38.309362       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W1009 20:15:38.311722       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1009 20:15:38.311809       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W1009 20:15:38.356958       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1009 20:15:38.357059       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W1009 20:15:38.377717       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1009 20:15:38.377750       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W1009 20:15:38.450610       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1009 20:15:38.450704       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W1009 20:15:38.492923       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1009 20:15:38.493043       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W1009 20:15:38.544860       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1009 20:15:38.544915       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W1009 20:15:38.604358       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1009 20:15:38.604400       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W1009 20:15:38.625990       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1009 20:15:38.626128       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	I1009 20:15:41.068180       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Oct 09 20:15:53 old-k8s-version-670649 kubelet[1362]: I1009 20:15:53.283386    1362 topology_manager.go:215] "Topology Admit Handler" podUID="38f23811-b6c3-404d-a1bb-450efc1a88a8" podNamespace="kube-system" podName="kindnet-4nzl2"
	Oct 09 20:15:53 old-k8s-version-670649 kubelet[1362]: I1009 20:15:53.329461    1362 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ed72fb72-aba8-4b62-af33-fa5fe774504d-xtables-lock\") pod \"kube-proxy-fffc5\" (UID: \"ed72fb72-aba8-4b62-af33-fa5fe774504d\") " pod="kube-system/kube-proxy-fffc5"
	Oct 09 20:15:53 old-k8s-version-670649 kubelet[1362]: I1009 20:15:53.329647    1362 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ed72fb72-aba8-4b62-af33-fa5fe774504d-lib-modules\") pod \"kube-proxy-fffc5\" (UID: \"ed72fb72-aba8-4b62-af33-fa5fe774504d\") " pod="kube-system/kube-proxy-fffc5"
	Oct 09 20:15:53 old-k8s-version-670649 kubelet[1362]: I1009 20:15:53.329835    1362 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-29ntk\" (UniqueName: \"kubernetes.io/projected/38f23811-b6c3-404d-a1bb-450efc1a88a8-kube-api-access-29ntk\") pod \"kindnet-4nzl2\" (UID: \"38f23811-b6c3-404d-a1bb-450efc1a88a8\") " pod="kube-system/kindnet-4nzl2"
	Oct 09 20:15:53 old-k8s-version-670649 kubelet[1362]: I1009 20:15:53.329948    1362 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/ed72fb72-aba8-4b62-af33-fa5fe774504d-kube-proxy\") pod \"kube-proxy-fffc5\" (UID: \"ed72fb72-aba8-4b62-af33-fa5fe774504d\") " pod="kube-system/kube-proxy-fffc5"
	Oct 09 20:15:53 old-k8s-version-670649 kubelet[1362]: I1009 20:15:53.330109    1362 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/38f23811-b6c3-404d-a1bb-450efc1a88a8-xtables-lock\") pod \"kindnet-4nzl2\" (UID: \"38f23811-b6c3-404d-a1bb-450efc1a88a8\") " pod="kube-system/kindnet-4nzl2"
	Oct 09 20:15:53 old-k8s-version-670649 kubelet[1362]: I1009 20:15:53.330244    1362 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6p5mf\" (UniqueName: \"kubernetes.io/projected/ed72fb72-aba8-4b62-af33-fa5fe774504d-kube-api-access-6p5mf\") pod \"kube-proxy-fffc5\" (UID: \"ed72fb72-aba8-4b62-af33-fa5fe774504d\") " pod="kube-system/kube-proxy-fffc5"
	Oct 09 20:15:53 old-k8s-version-670649 kubelet[1362]: I1009 20:15:53.330411    1362 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/38f23811-b6c3-404d-a1bb-450efc1a88a8-cni-cfg\") pod \"kindnet-4nzl2\" (UID: \"38f23811-b6c3-404d-a1bb-450efc1a88a8\") " pod="kube-system/kindnet-4nzl2"
	Oct 09 20:15:53 old-k8s-version-670649 kubelet[1362]: I1009 20:15:53.330516    1362 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/38f23811-b6c3-404d-a1bb-450efc1a88a8-lib-modules\") pod \"kindnet-4nzl2\" (UID: \"38f23811-b6c3-404d-a1bb-450efc1a88a8\") " pod="kube-system/kindnet-4nzl2"
	Oct 09 20:15:53 old-k8s-version-670649 kubelet[1362]: W1009 20:15:53.609262    1362 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/242f5a73bf3408c78204127e16255d5d302b161639419f815a7a343ee83b928d/crio-1527c5e141b3cbde43a8d9f485c814a3df295bf8b7e43217f10a40ceb3302b19 WatchSource:0}: Error finding container 1527c5e141b3cbde43a8d9f485c814a3df295bf8b7e43217f10a40ceb3302b19: Status 404 returned error can't find the container with id 1527c5e141b3cbde43a8d9f485c814a3df295bf8b7e43217f10a40ceb3302b19
	Oct 09 20:15:56 old-k8s-version-670649 kubelet[1362]: I1009 20:15:56.932867    1362 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-fffc5" podStartSLOduration=3.932825688 podCreationTimestamp="2025-10-09 20:15:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-09 20:15:53.927899071 +0000 UTC m=+13.375969644" watchObservedRunningTime="2025-10-09 20:15:56.932825688 +0000 UTC m=+16.380896252"
	Oct 09 20:16:00 old-k8s-version-670649 kubelet[1362]: I1009 20:16:00.820421    1362 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kindnet-4nzl2" podStartSLOduration=5.025482773 podCreationTimestamp="2025-10-09 20:15:53 +0000 UTC" firstStartedPulling="2025-10-09 20:15:53.62061987 +0000 UTC m=+13.068690426" lastFinishedPulling="2025-10-09 20:15:56.415514409 +0000 UTC m=+15.863584965" observedRunningTime="2025-10-09 20:15:56.933827303 +0000 UTC m=+16.381897859" watchObservedRunningTime="2025-10-09 20:16:00.820377312 +0000 UTC m=+20.268447868"
	Oct 09 20:16:07 old-k8s-version-670649 kubelet[1362]: I1009 20:16:07.092967    1362 kubelet_node_status.go:493] "Fast updating node status as it just became ready"
	Oct 09 20:16:07 old-k8s-version-670649 kubelet[1362]: I1009 20:16:07.141179    1362 topology_manager.go:215] "Topology Admit Handler" podUID="7148e7df-c3a2-4e32-ab15-be142bc605da" podNamespace="kube-system" podName="storage-provisioner"
	Oct 09 20:16:07 old-k8s-version-670649 kubelet[1362]: I1009 20:16:07.150586    1362 topology_manager.go:215] "Topology Admit Handler" podUID="a5653f04-c5f7-41b0-842e-6bf0d39c87e4" podNamespace="kube-system" podName="coredns-5dd5756b68-kz799"
	Oct 09 20:16:07 old-k8s-version-670649 kubelet[1362]: I1009 20:16:07.342020    1362 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a5653f04-c5f7-41b0-842e-6bf0d39c87e4-config-volume\") pod \"coredns-5dd5756b68-kz799\" (UID: \"a5653f04-c5f7-41b0-842e-6bf0d39c87e4\") " pod="kube-system/coredns-5dd5756b68-kz799"
	Oct 09 20:16:07 old-k8s-version-670649 kubelet[1362]: I1009 20:16:07.342167    1362 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/7148e7df-c3a2-4e32-ab15-be142bc605da-tmp\") pod \"storage-provisioner\" (UID: \"7148e7df-c3a2-4e32-ab15-be142bc605da\") " pod="kube-system/storage-provisioner"
	Oct 09 20:16:07 old-k8s-version-670649 kubelet[1362]: I1009 20:16:07.342237    1362 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qnggs\" (UniqueName: \"kubernetes.io/projected/7148e7df-c3a2-4e32-ab15-be142bc605da-kube-api-access-qnggs\") pod \"storage-provisioner\" (UID: \"7148e7df-c3a2-4e32-ab15-be142bc605da\") " pod="kube-system/storage-provisioner"
	Oct 09 20:16:07 old-k8s-version-670649 kubelet[1362]: I1009 20:16:07.342273    1362 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gp8lv\" (UniqueName: \"kubernetes.io/projected/a5653f04-c5f7-41b0-842e-6bf0d39c87e4-kube-api-access-gp8lv\") pod \"coredns-5dd5756b68-kz799\" (UID: \"a5653f04-c5f7-41b0-842e-6bf0d39c87e4\") " pod="kube-system/coredns-5dd5756b68-kz799"
	Oct 09 20:16:07 old-k8s-version-670649 kubelet[1362]: W1009 20:16:07.761420    1362 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/242f5a73bf3408c78204127e16255d5d302b161639419f815a7a343ee83b928d/crio-d42cb874bf3ba55ecb7601d03303917d8b1170e55b5dee1e9d4fd2f2947dbaf8 WatchSource:0}: Error finding container d42cb874bf3ba55ecb7601d03303917d8b1170e55b5dee1e9d4fd2f2947dbaf8: Status 404 returned error can't find the container with id d42cb874bf3ba55ecb7601d03303917d8b1170e55b5dee1e9d4fd2f2947dbaf8
	Oct 09 20:16:08 old-k8s-version-670649 kubelet[1362]: I1009 20:16:08.030705    1362 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=14.03065102 podCreationTimestamp="2025-10-09 20:15:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-09 20:16:08.008979965 +0000 UTC m=+27.457050521" watchObservedRunningTime="2025-10-09 20:16:08.03065102 +0000 UTC m=+27.478721584"
	Oct 09 20:16:08 old-k8s-version-670649 kubelet[1362]: I1009 20:16:08.984966    1362 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5dd5756b68-kz799" podStartSLOduration=15.984912334 podCreationTimestamp="2025-10-09 20:15:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-09 20:16:08.032571877 +0000 UTC m=+27.480642458" watchObservedRunningTime="2025-10-09 20:16:08.984912334 +0000 UTC m=+28.432982890"
	Oct 09 20:16:10 old-k8s-version-670649 kubelet[1362]: I1009 20:16:10.921418    1362 topology_manager.go:215] "Topology Admit Handler" podUID="0b2ec2bb-1c4e-4a74-9583-a369e03ce9b9" podNamespace="default" podName="busybox"
	Oct 09 20:16:11 old-k8s-version-670649 kubelet[1362]: I1009 20:16:11.003099    1362 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bnvvs\" (UniqueName: \"kubernetes.io/projected/0b2ec2bb-1c4e-4a74-9583-a369e03ce9b9-kube-api-access-bnvvs\") pod \"busybox\" (UID: \"0b2ec2bb-1c4e-4a74-9583-a369e03ce9b9\") " pod="default/busybox"
	Oct 09 20:16:11 old-k8s-version-670649 kubelet[1362]: W1009 20:16:11.275946    1362 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/242f5a73bf3408c78204127e16255d5d302b161639419f815a7a343ee83b928d/crio-f5eeaf57c6f0f274a1c2706910f42eea6998e7ce8d24956bf4778072cb0a0e21 WatchSource:0}: Error finding container f5eeaf57c6f0f274a1c2706910f42eea6998e7ce8d24956bf4778072cb0a0e21: Status 404 returned error can't find the container with id f5eeaf57c6f0f274a1c2706910f42eea6998e7ce8d24956bf4778072cb0a0e21
	
	
	==> storage-provisioner [ef807f21eaed3d98409d147488b637614220e96f4500845e8209241a18ceada2] <==
	I1009 20:16:07.891453       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1009 20:16:08.026063       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1009 20:16:08.027177       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1009 20:16:08.055083       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1009 20:16:08.060188       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-670649_83b66bea-21c6-459a-8dc0-91f4724cfd97!
	I1009 20:16:08.067262       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"1ca381e3-6bc7-4716-b803-0241acff8a2f", APIVersion:"v1", ResourceVersion:"455", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-670649_83b66bea-21c6-459a-8dc0-91f4724cfd97 became leader
	I1009 20:16:08.160531       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-670649_83b66bea-21c6-459a-8dc0-91f4724cfd97!
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-670649 -n old-k8s-version-670649
helpers_test.go:269: (dbg) Run:  kubectl --context old-k8s-version-670649 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (3.59s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (2.77s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p no-preload-020313 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable metrics-server -p no-preload-020313 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (315.369696ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-09T20:17:37Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
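The stderr above shows why the enable aborts: before touching any addon, the enable path first checks whether containers are paused, and that check shells out to `sudo runc list -f json`, which exits non-zero here because /run/runc does not exist on this crio node. A minimal sketch of that kind of paused-container check follows (an illustration only, not minikube's actual code; the JSON field names are assumed from runc's `list -f json` output):

// listPaused is a sketch of the check described by the error text above:
// run `sudo runc list -f json` and report any containers whose status is "paused".
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

type runcContainer struct {
	ID     string `json:"id"`
	Status string `json:"status"`
}

func listPaused() ([]string, error) {
	out, err := exec.Command("sudo", "runc", "list", "-f", "json").Output()
	if err != nil {
		// This is the failure mode in the log: runc exits with status 1
		// because its default state directory /run/runc is missing.
		return nil, fmt.Errorf("runc list: %w", err)
	}
	var containers []runcContainer
	if err := json.Unmarshal(out, &containers); err != nil {
		return nil, err
	}
	var paused []string
	for _, c := range containers {
		if c.Status == "paused" {
			paused = append(paused, c.ID)
		}
	}
	return paused, nil
}

func main() {
	ids, err := listPaused()
	if err != nil {
		fmt.Println("check paused failed:", err)
		return
	}
	fmt.Println("paused containers:", ids)
}

Run on the node itself, the equivalent by hand would be `sudo runc list -f json`, which on this host reproduces the "open /run/runc: no such file or directory" error seen above.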
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-arm64 addons enable metrics-server -p no-preload-020313 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context no-preload-020313 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context no-preload-020313 describe deploy/metrics-server -n kube-system: exit status 1 (84.224167ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): deployments.apps "metrics-server" not found

                                                
                                                
** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context no-preload-020313 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
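The deployment info above is empty because the metrics-server Deployment was never created (the describe returned NotFound once the enable aborted). For reference, a minimal client-go sketch of the image check the assertion describes is below; it is an illustration only (the test itself shells out to kubectl), and it assumes a kubeconfig at the default location with the current context pointing at this cluster:

// Fetch kube-system/metrics-server and report whether its container image
// carries the --registries override (fake.domain/registry.k8s.io/echoserver:1.4).
package main

import (
	"context"
	"fmt"
	"strings"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	dep, err := cs.AppsV1().Deployments("kube-system").Get(
		context.Background(), "metrics-server", metav1.GetOptions{})
	if err != nil {
		// In this run the Deployment does not exist at all, matching the
		// "deployments.apps \"metrics-server\" not found" error above.
		fmt.Println("lookup failed:", err)
		return
	}
	img := dep.Spec.Template.Spec.Containers[0].Image
	fmt.Println("image:", img,
		"has override:", strings.Contains(img, "fake.domain/registry.k8s.io/echoserver:1.4"))
}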
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/no-preload/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/no-preload/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect no-preload-020313
helpers_test.go:243: (dbg) docker inspect no-preload-020313:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "5f4dc51ee851ef6c368b3e8adfe4e5921c2b1bdc3199a9c54c6ccf58afab3861",
	        "Created": "2025-10-09T20:16:11.761091001Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 478634,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-09T20:16:11.863928806Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:0e619a02aa1f399c748a05785ca381533d7a01df724b62f3a9ddd9e288db8f6f",
	        "ResolvConfPath": "/var/lib/docker/containers/5f4dc51ee851ef6c368b3e8adfe4e5921c2b1bdc3199a9c54c6ccf58afab3861/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/5f4dc51ee851ef6c368b3e8adfe4e5921c2b1bdc3199a9c54c6ccf58afab3861/hostname",
	        "HostsPath": "/var/lib/docker/containers/5f4dc51ee851ef6c368b3e8adfe4e5921c2b1bdc3199a9c54c6ccf58afab3861/hosts",
	        "LogPath": "/var/lib/docker/containers/5f4dc51ee851ef6c368b3e8adfe4e5921c2b1bdc3199a9c54c6ccf58afab3861/5f4dc51ee851ef6c368b3e8adfe4e5921c2b1bdc3199a9c54c6ccf58afab3861-json.log",
	        "Name": "/no-preload-020313",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-020313:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "no-preload-020313",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "5f4dc51ee851ef6c368b3e8adfe4e5921c2b1bdc3199a9c54c6ccf58afab3861",
	                "LowerDir": "/var/lib/docker/overlay2/89e13088c213ea195f3949972cfac4cf35790514b34c96e6ac7e173e96264c21-init/diff:/var/lib/docker/overlay2/810a91395ed9b7ed2c0bbbdee8600efcf64f88722cbabc47d471235a9f901ed9/diff",
	                "MergedDir": "/var/lib/docker/overlay2/89e13088c213ea195f3949972cfac4cf35790514b34c96e6ac7e173e96264c21/merged",
	                "UpperDir": "/var/lib/docker/overlay2/89e13088c213ea195f3949972cfac4cf35790514b34c96e6ac7e173e96264c21/diff",
	                "WorkDir": "/var/lib/docker/overlay2/89e13088c213ea195f3949972cfac4cf35790514b34c96e6ac7e173e96264c21/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-020313",
	                "Source": "/var/lib/docker/volumes/no-preload-020313/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-020313",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-020313",
	                "name.minikube.sigs.k8s.io": "no-preload-020313",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "061365777f4b0cd0a969e3fc63421e4b2ca7bf7f9df4e21b964367ea38b4e6ba",
	            "SandboxKey": "/var/run/docker/netns/061365777f4b",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33421"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33422"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33425"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33423"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33424"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-020313": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "aa:e6:a8:7e:13:e5",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "e50c4d176bfa3eef4ff1ee9bca0047e351ec3aec36a4229f03c93ea4e9e653dd",
	                    "EndpointID": "e3a50360461fd1765bfe1e25184ae950492576ecbd52c65c5c7fa499c799ad9c",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-020313",
	                        "5f4dc51ee851"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
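Editor's note, not part of the captured output: the per-port host mappings under NetworkSettings.Ports above are the values the harness later reads back one field at a time with a Go template (see the `docker container inspect -f` calls further down in this report). A minimal sketch of that lookup from Go, assuming the container name from this run and a local Docker daemon:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        // Same template the cli_runner uses later in this log to locate the SSH port.
        format := `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
        out, err := exec.Command("docker", "container", "inspect", "-f", format, "no-preload-020313").Output()
        if err != nil {
            panic(err)
        }
        // For the inspect output shown above this prints 33421.
        fmt.Println("22/tcp published on host port", strings.TrimSpace(string(out)))
    }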
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-020313 -n no-preload-020313
helpers_test.go:252: <<< TestStartStop/group/no-preload/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/no-preload/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-020313 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p no-preload-020313 logs -n 25: (1.357852252s)
helpers_test.go:260: TestStartStop/group/no-preload/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────
────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────
────┤
	│ ssh     │ -p cilium-535911 sudo cat /etc/containerd/config.toml                                                                                                                                                                                         │ cilium-535911             │ jenkins │ v1.37.0 │ 09 Oct 25 20:05 UTC │                     │
	│ ssh     │ -p cilium-535911 sudo containerd config dump                                                                                                                                                                                                  │ cilium-535911             │ jenkins │ v1.37.0 │ 09 Oct 25 20:05 UTC │                     │
	│ ssh     │ -p cilium-535911 sudo systemctl status crio --all --full --no-pager                                                                                                                                                                           │ cilium-535911             │ jenkins │ v1.37.0 │ 09 Oct 25 20:05 UTC │                     │
	│ ssh     │ -p cilium-535911 sudo systemctl cat crio --no-pager                                                                                                                                                                                           │ cilium-535911             │ jenkins │ v1.37.0 │ 09 Oct 25 20:05 UTC │                     │
	│ ssh     │ -p cilium-535911 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                 │ cilium-535911             │ jenkins │ v1.37.0 │ 09 Oct 25 20:05 UTC │                     │
	│ ssh     │ -p cilium-535911 sudo crio config                                                                                                                                                                                                             │ cilium-535911             │ jenkins │ v1.37.0 │ 09 Oct 25 20:05 UTC │                     │
	│ delete  │ -p cilium-535911                                                                                                                                                                                                                              │ cilium-535911             │ jenkins │ v1.37.0 │ 09 Oct 25 20:05 UTC │ 09 Oct 25 20:05 UTC │
	│ start   │ -p force-systemd-env-242564 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                                                                                                                                    │ force-systemd-env-242564  │ jenkins │ v1.37.0 │ 09 Oct 25 20:05 UTC │                     │
	│ ssh     │ force-systemd-flag-736218 ssh cat /etc/crio/crio.conf.d/02-crio.conf                                                                                                                                                                          │ force-systemd-flag-736218 │ jenkins │ v1.37.0 │ 09 Oct 25 20:12 UTC │ 09 Oct 25 20:12 UTC │
	│ delete  │ -p force-systemd-flag-736218                                                                                                                                                                                                                  │ force-systemd-flag-736218 │ jenkins │ v1.37.0 │ 09 Oct 25 20:12 UTC │ 09 Oct 25 20:12 UTC │
	│ start   │ -p cert-expiration-282540 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio                                                                                                                                        │ cert-expiration-282540    │ jenkins │ v1.37.0 │ 09 Oct 25 20:12 UTC │ 09 Oct 25 20:12 UTC │
	│ delete  │ -p force-systemd-env-242564                                                                                                                                                                                                                   │ force-systemd-env-242564  │ jenkins │ v1.37.0 │ 09 Oct 25 20:14 UTC │ 09 Oct 25 20:14 UTC │
	│ start   │ -p cert-options-038875 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio                     │ cert-options-038875       │ jenkins │ v1.37.0 │ 09 Oct 25 20:14 UTC │ 09 Oct 25 20:15 UTC │
	│ ssh     │ cert-options-038875 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                                                   │ cert-options-038875       │ jenkins │ v1.37.0 │ 09 Oct 25 20:15 UTC │ 09 Oct 25 20:15 UTC │
	│ ssh     │ -p cert-options-038875 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                 │ cert-options-038875       │ jenkins │ v1.37.0 │ 09 Oct 25 20:15 UTC │ 09 Oct 25 20:15 UTC │
	│ delete  │ -p cert-options-038875                                                                                                                                                                                                                        │ cert-options-038875       │ jenkins │ v1.37.0 │ 09 Oct 25 20:15 UTC │ 09 Oct 25 20:15 UTC │
	│ start   │ -p old-k8s-version-670649 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-670649    │ jenkins │ v1.37.0 │ 09 Oct 25 20:15 UTC │ 09 Oct 25 20:16 UTC │
	│ start   │ -p cert-expiration-282540 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-282540    │ jenkins │ v1.37.0 │ 09 Oct 25 20:15 UTC │ 09 Oct 25 20:16 UTC │
	│ delete  │ -p cert-expiration-282540                                                                                                                                                                                                                     │ cert-expiration-282540    │ jenkins │ v1.37.0 │ 09 Oct 25 20:16 UTC │ 09 Oct 25 20:16 UTC │
	│ start   │ -p no-preload-020313 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-020313         │ jenkins │ v1.37.0 │ 09 Oct 25 20:16 UTC │ 09 Oct 25 20:17 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-670649 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-670649    │ jenkins │ v1.37.0 │ 09 Oct 25 20:16 UTC │                     │
	│ stop    │ -p old-k8s-version-670649 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-670649    │ jenkins │ v1.37.0 │ 09 Oct 25 20:16 UTC │ 09 Oct 25 20:16 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-670649 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-670649    │ jenkins │ v1.37.0 │ 09 Oct 25 20:16 UTC │ 09 Oct 25 20:16 UTC │
	│ start   │ -p old-k8s-version-670649 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-670649    │ jenkins │ v1.37.0 │ 09 Oct 25 20:16 UTC │ 09 Oct 25 20:17 UTC │
	│ addons  │ enable metrics-server -p no-preload-020313 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-020313         │ jenkins │ v1.37.0 │ 09 Oct 25 20:17 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────
────┘
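Editor's note, not part of the captured output: the last audit row (no END TIME) is the command the EnableAddonWhileActive test exercises, enabling metrics-server with an overridden image and registry. A sketch of issuing the same invocation programmatically, the way the harness drives the binary, using the profile name and flags copied from the table:

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        cmd := exec.Command("out/minikube-linux-arm64",
            "addons", "enable", "metrics-server", "-p", "no-preload-020313",
            "--images=MetricsServer=registry.k8s.io/echoserver:1.4",
            "--registries=MetricsServer=fake.domain")
        out, err := cmd.CombinedOutput()
        fmt.Printf("%s", out)
        if err != nil {
            fmt.Println("addons enable failed:", err)
        }
    }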
	
	
	==> Last Start <==
	Log file created at: 2025/10/09 20:16:37
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1009 20:16:37.710093  481385 out.go:360] Setting OutFile to fd 1 ...
	I1009 20:16:37.710381  481385 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1009 20:16:37.710395  481385 out.go:374] Setting ErrFile to fd 2...
	I1009 20:16:37.710402  481385 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1009 20:16:37.710670  481385 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21683-294150/.minikube/bin
	I1009 20:16:37.711046  481385 out.go:368] Setting JSON to false
	I1009 20:16:37.711999  481385 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":10737,"bootTime":1760030261,"procs":173,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1009 20:16:37.712067  481385 start.go:143] virtualization:  
	I1009 20:16:37.721679  481385 out.go:179] * [old-k8s-version-670649] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1009 20:16:37.724639  481385 out.go:179]   - MINIKUBE_LOCATION=21683
	I1009 20:16:37.724849  481385 notify.go:221] Checking for updates...
	I1009 20:16:37.731110  481385 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1009 20:16:37.734095  481385 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21683-294150/kubeconfig
	I1009 20:16:37.736915  481385 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21683-294150/.minikube
	I1009 20:16:37.739749  481385 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1009 20:16:37.742721  481385 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1009 20:16:37.746215  481385 config.go:182] Loaded profile config "old-k8s-version-670649": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1009 20:16:37.751254  481385 out.go:179] * Kubernetes 1.34.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.34.1
	I1009 20:16:37.754130  481385 driver.go:422] Setting default libvirt URI to qemu:///system
	I1009 20:16:37.808617  481385 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1009 20:16:37.808740  481385 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1009 20:16:37.908903  481385 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:57 OomKillDisable:true NGoroutines:68 SystemTime:2025-10-09 20:16:37.899048535 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214827008 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1009 20:16:37.909012  481385 docker.go:319] overlay module found
	I1009 20:16:37.915019  481385 out.go:179] * Using the docker driver based on existing profile
	I1009 20:16:37.917811  481385 start.go:309] selected driver: docker
	I1009 20:16:37.917828  481385 start.go:930] validating driver "docker" against &{Name:old-k8s-version-670649 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-670649 Namespace:default APIServerHAVIP: AP
IServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 Mo
untType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1009 20:16:37.917947  481385 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1009 20:16:37.918620  481385 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1009 20:16:38.019190  481385 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:57 OomKillDisable:true NGoroutines:68 SystemTime:2025-10-09 20:16:38.008342828 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214827008 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1009 20:16:38.019546  481385 start_flags.go:993] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1009 20:16:38.019580  481385 cni.go:84] Creating CNI manager for ""
	I1009 20:16:38.019642  481385 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1009 20:16:38.019689  481385 start.go:353] cluster config:
	{Name:old-k8s-version-670649 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-670649 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local
ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetri
cs:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1009 20:16:38.024688  481385 out.go:179] * Starting "old-k8s-version-670649" primary control-plane node in "old-k8s-version-670649" cluster
	I1009 20:16:38.027534  481385 cache.go:123] Beginning downloading kic base image for docker with crio
	I1009 20:16:38.030492  481385 out.go:179] * Pulling base image v0.0.48-1759745255-21703 ...
	I1009 20:16:38.033329  481385 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1009 20:16:38.033389  481385 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21683-294150/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4
	I1009 20:16:38.033399  481385 cache.go:58] Caching tarball of preloaded images
	I1009 20:16:38.033484  481385 preload.go:233] Found /home/jenkins/minikube-integration/21683-294150/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1009 20:16:38.033493  481385 cache.go:61] Finished verifying existence of preloaded tar for v1.28.0 on crio
	I1009 20:16:38.033615  481385 profile.go:143] Saving config to /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/old-k8s-version-670649/config.json ...
	I1009 20:16:38.033862  481385 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon
	I1009 20:16:38.066214  481385 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon, skipping pull
	I1009 20:16:38.066238  481385 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 exists in daemon, skipping load
	I1009 20:16:38.066257  481385 cache.go:232] Successfully downloaded all kic artifacts
	I1009 20:16:38.066281  481385 start.go:361] acquireMachinesLock for old-k8s-version-670649: {Name:mk748e355adc7f7fcc263c9edc4cd8976e687fe3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1009 20:16:38.066353  481385 start.go:365] duration metric: took 49.034µs to acquireMachinesLock for "old-k8s-version-670649"
	I1009 20:16:38.066376  481385 start.go:97] Skipping create...Using existing machine configuration
	I1009 20:16:38.066386  481385 fix.go:55] fixHost starting: 
	I1009 20:16:38.066677  481385 cli_runner.go:164] Run: docker container inspect old-k8s-version-670649 --format={{.State.Status}}
	I1009 20:16:38.093250  481385 fix.go:113] recreateIfNeeded on old-k8s-version-670649: state=Stopped err=<nil>
	W1009 20:16:38.093279  481385 fix.go:139] unexpected machine state, will restart: <nil>
	I1009 20:16:35.844260  478299 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1009 20:16:35.853223  478299 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1009 20:16:35.874276  478299 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1009 20:16:35.890337  478299 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2214 bytes)
	I1009 20:16:35.904526  478299 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1009 20:16:35.908084  478299 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
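Editor's note, not part of the captured output: the bash one-liner above is a small read-modify-write on /etc/hosts — keep every line except a stale control-plane.minikube.internal mapping, append the current node IP, and copy the result back into place. A minimal Go sketch of the same idea, with the IP and hostname taken from this log; it writes a scratch copy instead of touching /etc/hosts:

    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    func main() {
        const entry = "192.168.85.2\tcontrol-plane.minikube.internal"
        data, err := os.ReadFile("/etc/hosts")
        if err != nil {
            panic(err)
        }
        var kept []string
        for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
            // Drop any previous control-plane.minikube.internal mapping, keep everything else.
            if strings.HasSuffix(line, "\tcontrol-plane.minikube.internal") {
                continue
            }
            kept = append(kept, line)
        }
        kept = append(kept, entry)
        // The real runner copies the result over /etc/hosts with sudo; this just writes a scratch file.
        if err := os.WriteFile("/tmp/hosts.new", []byte(strings.Join(kept, "\n")+"\n"), 0o644); err != nil {
            panic(err)
        }
        fmt.Println("wrote /tmp/hosts.new with:", entry)
    }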
	I1009 20:16:35.919517  478299 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 20:16:36.060396  478299 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1009 20:16:36.078921  478299 certs.go:69] Setting up /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/no-preload-020313 for IP: 192.168.85.2
	I1009 20:16:36.078956  478299 certs.go:195] generating shared ca certs ...
	I1009 20:16:36.078979  478299 certs.go:227] acquiring lock for ca certs: {Name:mk21fccf7de12baa51e226eec44546b42b579a70 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 20:16:36.079149  478299 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21683-294150/.minikube/ca.key
	I1009 20:16:36.079204  478299 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21683-294150/.minikube/proxy-client-ca.key
	I1009 20:16:36.079216  478299 certs.go:257] generating profile certs ...
	I1009 20:16:36.079288  478299 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/no-preload-020313/client.key
	I1009 20:16:36.079304  478299 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/no-preload-020313/client.crt with IP's: []
	I1009 20:16:36.327405  478299 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/no-preload-020313/client.crt ...
	I1009 20:16:36.327440  478299 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/no-preload-020313/client.crt: {Name:mka53d3a23e7a51b8a9116d9e45469c9718c001a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 20:16:36.327637  478299 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/no-preload-020313/client.key ...
	I1009 20:16:36.327650  478299 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/no-preload-020313/client.key: {Name:mka07a8d67a309464c1fb565b33724bd232fd582 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 20:16:36.327743  478299 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/no-preload-020313/apiserver.key.ff7e88d0
	I1009 20:16:36.327761  478299 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/no-preload-020313/apiserver.crt.ff7e88d0 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I1009 20:16:37.620535  478299 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/no-preload-020313/apiserver.crt.ff7e88d0 ...
	I1009 20:16:37.620586  478299 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/no-preload-020313/apiserver.crt.ff7e88d0: {Name:mk648ec8c250f17159e446a06f0f6895c4feddb3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 20:16:37.620782  478299 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/no-preload-020313/apiserver.key.ff7e88d0 ...
	I1009 20:16:37.620821  478299 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/no-preload-020313/apiserver.key.ff7e88d0: {Name:mkf07ad45bd6973677e6670085092295e626da6a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 20:16:37.620933  478299 certs.go:382] copying /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/no-preload-020313/apiserver.crt.ff7e88d0 -> /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/no-preload-020313/apiserver.crt
	I1009 20:16:37.621051  478299 certs.go:386] copying /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/no-preload-020313/apiserver.key.ff7e88d0 -> /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/no-preload-020313/apiserver.key
	I1009 20:16:37.621178  478299 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/no-preload-020313/proxy-client.key
	I1009 20:16:37.621225  478299 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/no-preload-020313/proxy-client.crt with IP's: []
	I1009 20:16:38.294185  478299 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/no-preload-020313/proxy-client.crt ...
	I1009 20:16:38.294270  478299 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/no-preload-020313/proxy-client.crt: {Name:mkf2129e93cd3a98a0e3332a087e5a6cf4b14ef1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 20:16:38.294500  478299 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/no-preload-020313/proxy-client.key ...
	I1009 20:16:38.294539  478299 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/no-preload-020313/proxy-client.key: {Name:mk1a6546e831cb284417c82662a13afb96767cb2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 20:16:38.294798  478299 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-294150/.minikube/certs/296002.pem (1338 bytes)
	W1009 20:16:38.294868  478299 certs.go:480] ignoring /home/jenkins/minikube-integration/21683-294150/.minikube/certs/296002_empty.pem, impossibly tiny 0 bytes
	I1009 20:16:38.294893  478299 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-294150/.minikube/certs/ca-key.pem (1679 bytes)
	I1009 20:16:38.294949  478299 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-294150/.minikube/certs/ca.pem (1078 bytes)
	I1009 20:16:38.295005  478299 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-294150/.minikube/certs/cert.pem (1123 bytes)
	I1009 20:16:38.295068  478299 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-294150/.minikube/certs/key.pem (1679 bytes)
	I1009 20:16:38.295142  478299 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-294150/.minikube/files/etc/ssl/certs/2960022.pem (1708 bytes)
	I1009 20:16:38.295775  478299 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-294150/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1009 20:16:38.316174  478299 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-294150/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1009 20:16:38.339054  478299 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-294150/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1009 20:16:38.362913  478299 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-294150/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1009 20:16:38.387511  478299 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/no-preload-020313/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1009 20:16:38.411380  478299 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/no-preload-020313/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1009 20:16:38.456058  478299 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/no-preload-020313/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1009 20:16:38.490461  478299 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/no-preload-020313/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1009 20:16:38.540668  478299 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-294150/.minikube/files/etc/ssl/certs/2960022.pem --> /usr/share/ca-certificates/2960022.pem (1708 bytes)
	I1009 20:16:38.569164  478299 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-294150/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1009 20:16:38.601485  478299 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-294150/.minikube/certs/296002.pem --> /usr/share/ca-certificates/296002.pem (1338 bytes)
	I1009 20:16:38.643395  478299 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1009 20:16:38.659847  478299 ssh_runner.go:195] Run: openssl version
	I1009 20:16:38.670982  478299 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2960022.pem && ln -fs /usr/share/ca-certificates/2960022.pem /etc/ssl/certs/2960022.pem"
	I1009 20:16:38.682183  478299 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2960022.pem
	I1009 20:16:38.686629  478299 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  9 19:08 /usr/share/ca-certificates/2960022.pem
	I1009 20:16:38.686692  478299 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2960022.pem
	I1009 20:16:38.736526  478299 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2960022.pem /etc/ssl/certs/3ec20f2e.0"
	I1009 20:16:38.746434  478299 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1009 20:16:38.757975  478299 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1009 20:16:38.764715  478299 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  9 19:01 /usr/share/ca-certificates/minikubeCA.pem
	I1009 20:16:38.764839  478299 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1009 20:16:38.817491  478299 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1009 20:16:38.826197  478299 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/296002.pem && ln -fs /usr/share/ca-certificates/296002.pem /etc/ssl/certs/296002.pem"
	I1009 20:16:38.834930  478299 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/296002.pem
	I1009 20:16:38.839650  478299 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  9 19:08 /usr/share/ca-certificates/296002.pem
	I1009 20:16:38.839765  478299 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/296002.pem
	I1009 20:16:38.892278  478299 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/296002.pem /etc/ssl/certs/51391683.0"
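Editor's note, not part of the captured output: the repeated `openssl x509 -hash` / `ln -fs` pairs above install each CA into OpenSSL's hashed lookup scheme — the certificate stays under its own name in /usr/share/ca-certificates and gets a symlink named <subject-hash>.0 in /etc/ssl/certs, which is how verification code finds it. A short sketch of computing that symlink name, assuming openssl is on PATH and reusing the minikubeCA path from this log (the run above maps it to b5213941.0):

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        cert := "/usr/share/ca-certificates/minikubeCA.pem"
        // `openssl x509 -hash -noout` prints the subject hash used for the symlink name.
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", cert).Output()
        if err != nil {
            panic(err)
        }
        hash := strings.TrimSpace(string(out))
        fmt.Printf("ln -fs %s /etc/ssl/certs/%s.0\n", cert, hash)
    }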
	I1009 20:16:38.905276  478299 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1009 20:16:38.910025  478299 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1009 20:16:38.910127  478299 kubeadm.go:400] StartCluster: {Name:no-preload-020313 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-020313 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APISer
verNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: So
cketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1009 20:16:38.910244  478299 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1009 20:16:38.910328  478299 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1009 20:16:38.964975  478299 cri.go:89] found id: ""
	I1009 20:16:38.965144  478299 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1009 20:16:38.975816  478299 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1009 20:16:38.988204  478299 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1009 20:16:38.988322  478299 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1009 20:16:39.003605  478299 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1009 20:16:39.003682  478299 kubeadm.go:157] found existing configuration files:
	
	I1009 20:16:39.003776  478299 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1009 20:16:39.015281  478299 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1009 20:16:39.015415  478299 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1009 20:16:39.024074  478299 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1009 20:16:39.034758  478299 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1009 20:16:39.034875  478299 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1009 20:16:39.043577  478299 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1009 20:16:39.053086  478299 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1009 20:16:39.053250  478299 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1009 20:16:39.062059  478299 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1009 20:16:39.071294  478299 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1009 20:16:39.071435  478299 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
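Editor's note, not part of the captured output: the four grep/rm pairs above are the stale-config check — each kubeconfig under /etc/kubernetes is kept only if it already points at https://control-plane.minikube.internal:8443, otherwise it is removed so the kubeadm init that follows can regenerate it (here none of the files exist yet, so this is effectively a first start). A compact sketch of that loop, assuming the same file list and marker URL:

    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    func main() {
        const marker = "https://control-plane.minikube.internal:8443"
        files := []string{
            "/etc/kubernetes/admin.conf",
            "/etc/kubernetes/kubelet.conf",
            "/etc/kubernetes/controller-manager.conf",
            "/etc/kubernetes/scheduler.conf",
        }
        for _, f := range files {
            data, err := os.ReadFile(f)
            if err != nil || !strings.Contains(string(data), marker) {
                // Missing or pointing at a different endpoint: drop it and let kubeadm regenerate it.
                fmt.Println("removing", f)
                _ = os.Remove(f)
                continue
            }
            fmt.Println("keeping", f)
        }
    }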
	I1009 20:16:39.079903  478299 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1009 20:16:39.120584  478299 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1009 20:16:39.120889  478299 kubeadm.go:318] [preflight] Running pre-flight checks
	I1009 20:16:39.142200  478299 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1009 20:16:39.142377  478299 kubeadm.go:318] KERNEL_VERSION: 5.15.0-1084-aws
	I1009 20:16:39.142431  478299 kubeadm.go:318] OS: Linux
	I1009 20:16:39.142489  478299 kubeadm.go:318] CGROUPS_CPU: enabled
	I1009 20:16:39.142548  478299 kubeadm.go:318] CGROUPS_CPUACCT: enabled
	I1009 20:16:39.142614  478299 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1009 20:16:39.142706  478299 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1009 20:16:39.142798  478299 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1009 20:16:39.142892  478299 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1009 20:16:39.142972  478299 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1009 20:16:39.143056  478299 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1009 20:16:39.143139  478299 kubeadm.go:318] CGROUPS_BLKIO: enabled
	I1009 20:16:39.214677  478299 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1009 20:16:39.214806  478299 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1009 20:16:39.214926  478299 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1009 20:16:39.229563  478299 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1009 20:16:39.235724  478299 out.go:252]   - Generating certificates and keys ...
	I1009 20:16:39.235898  478299 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1009 20:16:39.235976  478299 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1009 20:16:38.096642  481385 out.go:252] * Restarting existing docker container for "old-k8s-version-670649" ...
	I1009 20:16:38.096732  481385 cli_runner.go:164] Run: docker start old-k8s-version-670649
	I1009 20:16:38.433033  481385 cli_runner.go:164] Run: docker container inspect old-k8s-version-670649 --format={{.State.Status}}
	I1009 20:16:38.463124  481385 kic.go:430] container "old-k8s-version-670649" state is running.
	I1009 20:16:38.463530  481385 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-670649
	I1009 20:16:38.490833  481385 profile.go:143] Saving config to /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/old-k8s-version-670649/config.json ...
	I1009 20:16:38.491058  481385 machine.go:93] provisionDockerMachine start ...
	I1009 20:16:38.491125  481385 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-670649
	I1009 20:16:38.517209  481385 main.go:141] libmachine: Using SSH client type: native
	I1009 20:16:38.517528  481385 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33426 <nil> <nil>}
	I1009 20:16:38.517537  481385 main.go:141] libmachine: About to run SSH command:
	hostname
	I1009 20:16:38.518207  481385 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:50046->127.0.0.1:33426: read: connection reset by peer
	I1009 20:16:41.669690  481385 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-670649
	
	I1009 20:16:41.669775  481385 ubuntu.go:182] provisioning hostname "old-k8s-version-670649"
	I1009 20:16:41.669871  481385 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-670649
	I1009 20:16:41.693401  481385 main.go:141] libmachine: Using SSH client type: native
	I1009 20:16:41.693697  481385 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33426 <nil> <nil>}
	I1009 20:16:41.693709  481385 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-670649 && echo "old-k8s-version-670649" | sudo tee /etc/hostname
	I1009 20:16:41.880911  481385 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-670649
	
	I1009 20:16:41.881017  481385 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-670649
	I1009 20:16:41.909273  481385 main.go:141] libmachine: Using SSH client type: native
	I1009 20:16:41.909659  481385 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33426 <nil> <nil>}
	I1009 20:16:41.909682  481385 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-670649' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-670649/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-670649' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1009 20:16:42.070411  481385 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1009 20:16:42.070446  481385 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21683-294150/.minikube CaCertPath:/home/jenkins/minikube-integration/21683-294150/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21683-294150/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21683-294150/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21683-294150/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21683-294150/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21683-294150/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21683-294150/.minikube}
	I1009 20:16:42.070467  481385 ubuntu.go:190] setting up certificates
	I1009 20:16:42.070478  481385 provision.go:84] configureAuth start
	I1009 20:16:42.070562  481385 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-670649
	I1009 20:16:42.099140  481385 provision.go:143] copyHostCerts
	I1009 20:16:42.099217  481385 exec_runner.go:144] found /home/jenkins/minikube-integration/21683-294150/.minikube/ca.pem, removing ...
	I1009 20:16:42.099237  481385 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21683-294150/.minikube/ca.pem
	I1009 20:16:42.099323  481385 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-294150/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21683-294150/.minikube/ca.pem (1078 bytes)
	I1009 20:16:42.099428  481385 exec_runner.go:144] found /home/jenkins/minikube-integration/21683-294150/.minikube/cert.pem, removing ...
	I1009 20:16:42.099433  481385 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21683-294150/.minikube/cert.pem
	I1009 20:16:42.099460  481385 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-294150/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21683-294150/.minikube/cert.pem (1123 bytes)
	I1009 20:16:42.099517  481385 exec_runner.go:144] found /home/jenkins/minikube-integration/21683-294150/.minikube/key.pem, removing ...
	I1009 20:16:42.099527  481385 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21683-294150/.minikube/key.pem
	I1009 20:16:42.099553  481385 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-294150/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21683-294150/.minikube/key.pem (1679 bytes)
	I1009 20:16:42.099600  481385 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21683-294150/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21683-294150/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21683-294150/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-670649 san=[127.0.0.1 192.168.76.2 localhost minikube old-k8s-version-670649]
	I1009 20:16:42.323273  481385 provision.go:177] copyRemoteCerts
	I1009 20:16:42.323350  481385 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1009 20:16:42.323405  481385 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-670649
	I1009 20:16:42.342494  481385 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33426 SSHKeyPath:/home/jenkins/minikube-integration/21683-294150/.minikube/machines/old-k8s-version-670649/id_rsa Username:docker}
	I1009 20:16:42.445850  481385 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-294150/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1009 20:16:42.469850  481385 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-294150/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1009 20:16:42.492172  481385 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-294150/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1009 20:16:42.518001  481385 provision.go:87] duration metric: took 447.498511ms to configureAuth
	I1009 20:16:42.518032  481385 ubuntu.go:206] setting minikube options for container-runtime
	I1009 20:16:42.518274  481385 config.go:182] Loaded profile config "old-k8s-version-670649": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1009 20:16:42.518415  481385 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-670649
	I1009 20:16:42.546789  481385 main.go:141] libmachine: Using SSH client type: native
	I1009 20:16:42.547158  481385 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33426 <nil> <nil>}
	I1009 20:16:42.547181  481385 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1009 20:16:42.896471  481385 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1009 20:16:42.896509  481385 machine.go:96] duration metric: took 4.405436818s to provisionDockerMachine
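(Note: the provisioning step above writes a two-line env file for cri-o and restarts the service. A minimal verification sketch, not part of the test run itself, assuming the SSH command above succeeded:)
    sudo cat /etc/sysconfig/crio.minikube   # expected content: CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
    sudo systemctl is-active crio           # expected to report: active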
	I1009 20:16:42.896521  481385 start.go:294] postStartSetup for "old-k8s-version-670649" (driver="docker")
	I1009 20:16:42.896532  481385 start.go:323] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1009 20:16:42.896609  481385 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1009 20:16:42.896664  481385 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-670649
	I1009 20:16:42.922963  481385 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33426 SSHKeyPath:/home/jenkins/minikube-integration/21683-294150/.minikube/machines/old-k8s-version-670649/id_rsa Username:docker}
	I1009 20:16:43.030412  481385 ssh_runner.go:195] Run: cat /etc/os-release
	I1009 20:16:43.034477  481385 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1009 20:16:43.034503  481385 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1009 20:16:43.034515  481385 filesync.go:126] Scanning /home/jenkins/minikube-integration/21683-294150/.minikube/addons for local assets ...
	I1009 20:16:43.034568  481385 filesync.go:126] Scanning /home/jenkins/minikube-integration/21683-294150/.minikube/files for local assets ...
	I1009 20:16:43.034651  481385 filesync.go:149] local asset: /home/jenkins/minikube-integration/21683-294150/.minikube/files/etc/ssl/certs/2960022.pem -> 2960022.pem in /etc/ssl/certs
	I1009 20:16:43.034763  481385 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1009 20:16:43.043418  481385 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-294150/.minikube/files/etc/ssl/certs/2960022.pem --> /etc/ssl/certs/2960022.pem (1708 bytes)
	I1009 20:16:43.064038  481385 start.go:297] duration metric: took 167.50025ms for postStartSetup
	I1009 20:16:43.064187  481385 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1009 20:16:43.064264  481385 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-670649
	I1009 20:16:43.083289  481385 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33426 SSHKeyPath:/home/jenkins/minikube-integration/21683-294150/.minikube/machines/old-k8s-version-670649/id_rsa Username:docker}
	I1009 20:16:43.182548  481385 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1009 20:16:43.187934  481385 fix.go:57] duration metric: took 5.121542159s for fixHost
	I1009 20:16:43.187956  481385 start.go:84] releasing machines lock for "old-k8s-version-670649", held for 5.121591997s
	I1009 20:16:43.188042  481385 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-670649
	I1009 20:16:43.205619  481385 ssh_runner.go:195] Run: cat /version.json
	I1009 20:16:43.205676  481385 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-670649
	I1009 20:16:43.205927  481385 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1009 20:16:43.205979  481385 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-670649
	I1009 20:16:43.245229  481385 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33426 SSHKeyPath:/home/jenkins/minikube-integration/21683-294150/.minikube/machines/old-k8s-version-670649/id_rsa Username:docker}
	I1009 20:16:43.245655  481385 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33426 SSHKeyPath:/home/jenkins/minikube-integration/21683-294150/.minikube/machines/old-k8s-version-670649/id_rsa Username:docker}
	I1009 20:16:43.447867  481385 ssh_runner.go:195] Run: systemctl --version
	I1009 20:16:43.455120  481385 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1009 20:16:43.496305  481385 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1009 20:16:43.501876  481385 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1009 20:16:43.501961  481385 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1009 20:16:43.511250  481385 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1009 20:16:43.511275  481385 start.go:496] detecting cgroup driver to use...
	I1009 20:16:43.511308  481385 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1009 20:16:43.511359  481385 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1009 20:16:43.529278  481385 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1009 20:16:43.544523  481385 docker.go:218] disabling cri-docker service (if available) ...
	I1009 20:16:43.544594  481385 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1009 20:16:43.561731  481385 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1009 20:16:43.576582  481385 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1009 20:16:43.723843  481385 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1009 20:16:43.863924  481385 docker.go:234] disabling docker service ...
	I1009 20:16:43.863993  481385 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1009 20:16:43.882084  481385 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1009 20:16:43.897088  481385 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1009 20:16:44.074366  481385 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1009 20:16:44.231044  481385 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1009 20:16:44.246617  481385 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1009 20:16:44.268051  481385 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1009 20:16:44.268129  481385 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 20:16:44.278262  481385 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1009 20:16:44.278371  481385 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 20:16:44.288020  481385 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 20:16:44.297745  481385 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 20:16:44.307388  481385 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1009 20:16:44.317844  481385 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 20:16:44.327740  481385 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 20:16:44.339533  481385 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 20:16:44.349803  481385 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1009 20:16:44.359072  481385 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1009 20:16:44.366610  481385 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 20:16:44.521645  481385 ssh_runner.go:195] Run: sudo systemctl restart crio
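(Note: the sed commands above edit /etc/crio/crio.conf.d/02-crio.conf in place before the crio restart. Assuming every expression matched, the keys they touch would end up roughly as sketched below; the surrounding TOML table headers are not shown in this log and are omitted here:)
    pause_image = "registry.k8s.io/pause:3.9"
    cgroup_manager = "cgroupfs"
    conmon_cgroup = "pod"
    default_sysctls = [
      "net.ipv4.ip_unprivileged_port_start=0",
    ]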
	I1009 20:16:44.679751  481385 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1009 20:16:44.679846  481385 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1009 20:16:44.684098  481385 start.go:564] Will wait 60s for crictl version
	I1009 20:16:44.684208  481385 ssh_runner.go:195] Run: which crictl
	I1009 20:16:44.688286  481385 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1009 20:16:44.718649  481385 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
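(Note: because /etc/crictl.yaml was written above with runtime-endpoint pointing at unix:///var/run/crio/crio.sock, crictl reaches cri-o without extra flags; the later "sudo crictl images --output json" call relies on that. An equivalent manual check, offered only as a sketch:)
    sudo crictl info                     # runtime status via the endpoint from /etc/crictl.yaml
    sudo crictl images --output json     # same listing the preload check below uses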
	I1009 20:16:44.718774  481385 ssh_runner.go:195] Run: crio --version
	I1009 20:16:44.755993  481385 ssh_runner.go:195] Run: crio --version
	I1009 20:16:44.796369  481385 out.go:179] * Preparing Kubernetes v1.28.0 on CRI-O 1.34.1 ...
	I1009 20:16:41.080691  478299 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1009 20:16:41.207685  478299 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1009 20:16:41.689901  478299 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1009 20:16:41.996700  478299 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1009 20:16:42.861703  478299 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1009 20:16:42.862260  478299 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [localhost no-preload-020313] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1009 20:16:43.135394  478299 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1009 20:16:43.135938  478299 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [localhost no-preload-020313] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1009 20:16:43.531547  478299 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1009 20:16:43.880256  478299 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1009 20:16:44.268711  478299 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1009 20:16:44.269065  478299 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1009 20:16:44.906718  478299 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1009 20:16:44.799303  481385 cli_runner.go:164] Run: docker network inspect old-k8s-version-670649 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1009 20:16:44.818898  481385 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1009 20:16:44.823168  481385 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1009 20:16:44.832954  481385 kubeadm.go:883] updating cluster {Name:old-k8s-version-670649 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-670649 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1009 20:16:44.833055  481385 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1009 20:16:44.833139  481385 ssh_runner.go:195] Run: sudo crictl images --output json
	I1009 20:16:44.875337  481385 crio.go:514] all images are preloaded for cri-o runtime.
	I1009 20:16:44.875358  481385 crio.go:433] Images already preloaded, skipping extraction
	I1009 20:16:44.875443  481385 ssh_runner.go:195] Run: sudo crictl images --output json
	I1009 20:16:44.902424  481385 crio.go:514] all images are preloaded for cri-o runtime.
	I1009 20:16:44.902493  481385 cache_images.go:85] Images are preloaded, skipping loading
	I1009 20:16:44.902530  481385 kubeadm.go:934] updating node { 192.168.76.2 8443 v1.28.0 crio true true} ...
	I1009 20:16:44.902667  481385 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=old-k8s-version-670649 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-670649 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
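(Note: the kubelet unit fragment printed above is rendered in memory and, per the 372-byte scp a little further down, written to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf. A hedged way to confirm systemd picked up the override after the daemon-reload, not something this test runs:)
    systemctl cat kubelet | grep -A2 ExecStart   # merged unit plus the 10-kubeadm.conf drop-in
    systemctl is-active kubelet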
	I1009 20:16:44.902799  481385 ssh_runner.go:195] Run: crio config
	I1009 20:16:44.988911  481385 cni.go:84] Creating CNI manager for ""
	I1009 20:16:44.988938  481385 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1009 20:16:44.988984  481385 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1009 20:16:44.989016  481385 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.28.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-670649 NodeName:old-k8s-version-670649 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1009 20:16:44.989252  481385 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "old-k8s-version-670649"
	  kubeletExtraArgs:
	    node-ip: 192.168.76.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1009 20:16:44.989369  481385 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.0
	I1009 20:16:44.997602  481385 binaries.go:44] Found k8s binaries, skipping transfer
	I1009 20:16:44.997730  481385 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1009 20:16:45.006618  481385 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (372 bytes)
	I1009 20:16:45.031433  481385 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1009 20:16:45.054555  481385 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2160 bytes)
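(Note: the generated kubeadm config shown above is copied to /var/tmp/minikube/kubeadm.yaml.new just here. A rough way to sanity-check such a config outside of minikube, as an illustrative sketch and not what minikube itself runs:)
    sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml.new --dry-run   # parses and validates without modifying the node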
	I1009 20:16:45.072396  481385 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1009 20:16:45.079593  481385 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1009 20:16:45.096009  481385 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 20:16:45.276703  481385 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1009 20:16:45.298696  481385 certs.go:69] Setting up /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/old-k8s-version-670649 for IP: 192.168.76.2
	I1009 20:16:45.298724  481385 certs.go:195] generating shared ca certs ...
	I1009 20:16:45.298748  481385 certs.go:227] acquiring lock for ca certs: {Name:mk21fccf7de12baa51e226eec44546b42b579a70 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 20:16:45.298992  481385 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21683-294150/.minikube/ca.key
	I1009 20:16:45.299093  481385 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21683-294150/.minikube/proxy-client-ca.key
	I1009 20:16:45.299140  481385 certs.go:257] generating profile certs ...
	I1009 20:16:45.299296  481385 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/old-k8s-version-670649/client.key
	I1009 20:16:45.299423  481385 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/old-k8s-version-670649/apiserver.key.dd9fd387
	I1009 20:16:45.299507  481385 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/old-k8s-version-670649/proxy-client.key
	I1009 20:16:45.299698  481385 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-294150/.minikube/certs/296002.pem (1338 bytes)
	W1009 20:16:45.299761  481385 certs.go:480] ignoring /home/jenkins/minikube-integration/21683-294150/.minikube/certs/296002_empty.pem, impossibly tiny 0 bytes
	I1009 20:16:45.299778  481385 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-294150/.minikube/certs/ca-key.pem (1679 bytes)
	I1009 20:16:45.299839  481385 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-294150/.minikube/certs/ca.pem (1078 bytes)
	I1009 20:16:45.299914  481385 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-294150/.minikube/certs/cert.pem (1123 bytes)
	I1009 20:16:45.299954  481385 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-294150/.minikube/certs/key.pem (1679 bytes)
	I1009 20:16:45.300037  481385 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-294150/.minikube/files/etc/ssl/certs/2960022.pem (1708 bytes)
	I1009 20:16:45.301348  481385 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-294150/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1009 20:16:45.353524  481385 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-294150/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1009 20:16:45.449476  481385 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-294150/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1009 20:16:45.522963  481385 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-294150/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1009 20:16:45.614293  481385 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/old-k8s-version-670649/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1009 20:16:45.646410  481385 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/old-k8s-version-670649/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1009 20:16:45.667247  481385 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/old-k8s-version-670649/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1009 20:16:45.689751  481385 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/old-k8s-version-670649/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1009 20:16:45.710720  481385 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-294150/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1009 20:16:45.746413  481385 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-294150/.minikube/certs/296002.pem --> /usr/share/ca-certificates/296002.pem (1338 bytes)
	I1009 20:16:45.766736  481385 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-294150/.minikube/files/etc/ssl/certs/2960022.pem --> /usr/share/ca-certificates/2960022.pem (1708 bytes)
	I1009 20:16:45.787021  481385 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1009 20:16:45.802331  481385 ssh_runner.go:195] Run: openssl version
	I1009 20:16:45.810346  481385 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1009 20:16:45.821791  481385 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1009 20:16:45.827837  481385 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  9 19:01 /usr/share/ca-certificates/minikubeCA.pem
	I1009 20:16:45.827955  481385 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1009 20:16:45.872198  481385 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1009 20:16:45.881692  481385 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/296002.pem && ln -fs /usr/share/ca-certificates/296002.pem /etc/ssl/certs/296002.pem"
	I1009 20:16:45.891566  481385 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/296002.pem
	I1009 20:16:45.896458  481385 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  9 19:08 /usr/share/ca-certificates/296002.pem
	I1009 20:16:45.896559  481385 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/296002.pem
	I1009 20:16:45.940276  481385 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/296002.pem /etc/ssl/certs/51391683.0"
	I1009 20:16:45.949886  481385 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2960022.pem && ln -fs /usr/share/ca-certificates/2960022.pem /etc/ssl/certs/2960022.pem"
	I1009 20:16:45.959355  481385 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2960022.pem
	I1009 20:16:45.963827  481385 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  9 19:08 /usr/share/ca-certificates/2960022.pem
	I1009 20:16:45.963937  481385 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2960022.pem
	I1009 20:16:46.017781  481385 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2960022.pem /etc/ssl/certs/3ec20f2e.0"
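(Note: the openssl -hash / ln -fs pairs above create the hash-named symlinks (b5213941.0, 51391683.0, 3ec20f2e.0) that OpenSSL's certificate-directory lookup expects. Condensed into one sketch, using the paths from this log:)
    pem=/usr/share/ca-certificates/minikubeCA.pem
    hash=$(openssl x509 -hash -noout -in "$pem")      # e.g. b5213941
    sudo ln -fs "$pem" "/etc/ssl/certs/${hash}.0"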
	I1009 20:16:46.038224  481385 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1009 20:16:46.043417  481385 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1009 20:16:46.118747  481385 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1009 20:16:46.214079  481385 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1009 20:16:46.290300  481385 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1009 20:16:46.366193  481385 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1009 20:16:46.479031  481385 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
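(Note: the -checkend 86400 probes above ask openssl whether each certificate will still be valid 86400 seconds, i.e. 24 hours, from now; a zero exit means it will be, a non-zero exit means it would expire within that window and would need regeneration. For example, reusing one of the paths above:)
    openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400 \
      && echo "still valid in 24h" || echo "expires within 24h"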
	I1009 20:16:46.593503  481385 kubeadm.go:400] StartCluster: {Name:old-k8s-version-670649 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-670649 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1009 20:16:46.593603  481385 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1009 20:16:46.593730  481385 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1009 20:16:46.678442  481385 cri.go:89] found id: "35d9539ca66deeeb7d54f441e4e9faa5be578cf1ccc3e88b622d80472a21a3aa"
	I1009 20:16:46.678473  481385 cri.go:89] found id: "2dbcc3dbc3674682da2fd59a5223bfdbb8dff89e8f24cc4606b92f04b8486139"
	I1009 20:16:46.678478  481385 cri.go:89] found id: "dbb7cba7d5da37c54a17588da4d76f5d70497f3e32a6e495c433bd46fb90292a"
	I1009 20:16:46.678490  481385 cri.go:89] found id: "c615280026154e697494663ebf653ff25eb7cef14b02ea4bc2dce85a23e792fd"
	I1009 20:16:46.678511  481385 cri.go:89] found id: ""
	I1009 20:16:46.678583  481385 ssh_runner.go:195] Run: sudo runc list -f json
	W1009 20:16:46.693209  481385 kubeadm.go:407] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-09T20:16:46Z" level=error msg="open /run/runc: no such file or directory"
	I1009 20:16:46.693323  481385 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1009 20:16:46.720652  481385 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1009 20:16:46.720675  481385 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1009 20:16:46.720759  481385 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1009 20:16:46.749931  481385 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1009 20:16:46.750445  481385 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-670649" does not appear in /home/jenkins/minikube-integration/21683-294150/kubeconfig
	I1009 20:16:46.750594  481385 kubeconfig.go:62] /home/jenkins/minikube-integration/21683-294150/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-670649" cluster setting kubeconfig missing "old-k8s-version-670649" context setting]
	I1009 20:16:46.750938  481385 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-294150/kubeconfig: {Name:mke4661344aa77e10bf6690825e8d3a29b29b411 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 20:16:46.752663  481385 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1009 20:16:46.771345  481385 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.76.2
	I1009 20:16:46.771381  481385 kubeadm.go:601] duration metric: took 50.699568ms to restartPrimaryControlPlane
	I1009 20:16:46.771391  481385 kubeadm.go:402] duration metric: took 177.899732ms to StartCluster
	I1009 20:16:46.771436  481385 settings.go:142] acquiring lock: {Name:mk20228ebaa2294ae35726600a0d8058088b24a4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 20:16:46.771516  481385 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21683-294150/kubeconfig
	I1009 20:16:46.772198  481385 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-294150/kubeconfig: {Name:mke4661344aa77e10bf6690825e8d3a29b29b411 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 20:16:46.772482  481385 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1009 20:16:46.772866  481385 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1009 20:16:46.772948  481385 addons.go:69] Setting storage-provisioner=true in profile "old-k8s-version-670649"
	I1009 20:16:46.772966  481385 addons.go:238] Setting addon storage-provisioner=true in "old-k8s-version-670649"
	W1009 20:16:46.772973  481385 addons.go:247] addon storage-provisioner should already be in state true
	I1009 20:16:46.773010  481385 host.go:66] Checking if "old-k8s-version-670649" exists ...
	I1009 20:16:46.773513  481385 cli_runner.go:164] Run: docker container inspect old-k8s-version-670649 --format={{.State.Status}}
	I1009 20:16:46.773875  481385 config.go:182] Loaded profile config "old-k8s-version-670649": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1009 20:16:46.773945  481385 addons.go:69] Setting dashboard=true in profile "old-k8s-version-670649"
	I1009 20:16:46.773960  481385 addons.go:238] Setting addon dashboard=true in "old-k8s-version-670649"
	W1009 20:16:46.773967  481385 addons.go:247] addon dashboard should already be in state true
	I1009 20:16:46.774003  481385 host.go:66] Checking if "old-k8s-version-670649" exists ...
	I1009 20:16:46.774460  481385 cli_runner.go:164] Run: docker container inspect old-k8s-version-670649 --format={{.State.Status}}
	I1009 20:16:46.774831  481385 addons.go:69] Setting default-storageclass=true in profile "old-k8s-version-670649"
	I1009 20:16:46.774876  481385 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-670649"
	I1009 20:16:46.775188  481385 cli_runner.go:164] Run: docker container inspect old-k8s-version-670649 --format={{.State.Status}}
	I1009 20:16:46.777704  481385 out.go:179] * Verifying Kubernetes components...
	I1009 20:16:46.784731  481385 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 20:16:46.819316  481385 addons.go:238] Setting addon default-storageclass=true in "old-k8s-version-670649"
	W1009 20:16:46.819353  481385 addons.go:247] addon default-storageclass should already be in state true
	I1009 20:16:46.819380  481385 host.go:66] Checking if "old-k8s-version-670649" exists ...
	I1009 20:16:46.819818  481385 cli_runner.go:164] Run: docker container inspect old-k8s-version-670649 --format={{.State.Status}}
	I1009 20:16:46.840477  481385 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1009 20:16:46.843470  481385 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1009 20:16:46.843497  481385 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1009 20:16:46.843572  481385 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-670649
	I1009 20:16:46.854009  481385 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1009 20:16:46.857311  481385 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1009 20:16:45.859523  478299 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1009 20:16:46.447074  478299 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1009 20:16:47.201543  478299 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1009 20:16:47.363542  478299 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1009 20:16:47.363663  478299 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1009 20:16:47.369474  478299 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1009 20:16:46.861772  481385 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1009 20:16:46.861799  481385 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1009 20:16:46.861871  481385 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-670649
	I1009 20:16:46.887868  481385 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1009 20:16:46.887889  481385 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1009 20:16:46.887957  481385 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-670649
	I1009 20:16:46.907088  481385 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33426 SSHKeyPath:/home/jenkins/minikube-integration/21683-294150/.minikube/machines/old-k8s-version-670649/id_rsa Username:docker}
	I1009 20:16:46.918784  481385 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33426 SSHKeyPath:/home/jenkins/minikube-integration/21683-294150/.minikube/machines/old-k8s-version-670649/id_rsa Username:docker}
	I1009 20:16:46.940371  481385 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33426 SSHKeyPath:/home/jenkins/minikube-integration/21683-294150/.minikube/machines/old-k8s-version-670649/id_rsa Username:docker}
	I1009 20:16:47.261235  481385 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1009 20:16:47.290345  481385 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1009 20:16:47.326624  481385 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-670649" to be "Ready" ...
	I1009 20:16:47.330738  481385 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1009 20:16:47.330805  481385 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1009 20:16:47.417008  481385 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1009 20:16:47.417080  481385 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1009 20:16:47.428393  481385 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1009 20:16:47.494728  481385 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1009 20:16:47.494840  481385 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1009 20:16:47.594024  481385 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1009 20:16:47.594098  481385 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1009 20:16:47.677617  481385 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1009 20:16:47.677721  481385 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1009 20:16:47.373061  478299 out.go:252]   - Booting up control plane ...
	I1009 20:16:47.373234  478299 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1009 20:16:47.373317  478299 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1009 20:16:47.373388  478299 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1009 20:16:47.398190  478299 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1009 20:16:47.398305  478299 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1009 20:16:47.409097  478299 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1009 20:16:47.409473  478299 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1009 20:16:47.409709  478299 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1009 20:16:47.642446  478299 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1009 20:16:47.642572  478299 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1009 20:16:49.643972  478299 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 2.00182725s
	I1009 20:16:49.647339  478299 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1009 20:16:49.647627  478299 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.85.2:8443/livez
	I1009 20:16:49.647728  478299 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1009 20:16:49.647814  478299 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
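(Note: the control-plane-check above probes local health endpoints whose results appear further down, interleaved with the other cluster's log. A hedged manual equivalent of those four probes, using the addresses shown here; -k is needed because the components serve local TLS:)
    curl -sf  http://127.0.0.1:10248/healthz      # kubelet
    curl -skf https://192.168.85.2:8443/livez     # kube-apiserver
    curl -skf https://127.0.0.1:10257/healthz     # kube-controller-manager
    curl -skf https://127.0.0.1:10259/livez       # kube-scheduler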
	I1009 20:16:47.801620  481385 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1009 20:16:47.801701  481385 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1009 20:16:47.860132  481385 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1009 20:16:47.860207  481385 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1009 20:16:47.883831  481385 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1009 20:16:47.883906  481385 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1009 20:16:47.908542  481385 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1009 20:16:47.908618  481385 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1009 20:16:47.934526  481385 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1009 20:16:54.788417  481385 node_ready.go:49] node "old-k8s-version-670649" is "Ready"
	I1009 20:16:54.788451  481385 node_ready.go:38] duration metric: took 7.461741582s for node "old-k8s-version-670649" to be "Ready" ...
	I1009 20:16:54.788478  481385 api_server.go:52] waiting for apiserver process to appear ...
	I1009 20:16:54.788543  481385 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:16:58.629147  481385 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (11.338771703s)
	I1009 20:16:58.629210  481385 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (11.200736878s)
	I1009 20:16:59.310730  481385 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (11.376158923s)
	I1009 20:16:59.310881  481385 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (4.522321332s)
	I1009 20:16:59.310901  481385 api_server.go:72] duration metric: took 12.538388479s to wait for apiserver process to appear ...
	I1009 20:16:59.310908  481385 api_server.go:88] waiting for apiserver healthz status ...
	I1009 20:16:59.310928  481385 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1009 20:16:59.313729  481385 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p old-k8s-version-670649 addons enable metrics-server
	
	I1009 20:16:59.316588  481385 out.go:179] * Enabled addons: storage-provisioner, default-storageclass, dashboard
	I1009 20:16:58.662293  478299 kubeadm.go:318] [control-plane-check] kube-controller-manager is healthy after 9.014455722s
	I1009 20:16:59.626636  478299 kubeadm.go:318] [control-plane-check] kube-scheduler is healthy after 9.979222155s
	I1009 20:17:01.653447  478299 kubeadm.go:318] [control-plane-check] kube-apiserver is healthy after 12.003119945s
	I1009 20:17:01.685340  478299 kubeadm.go:318] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1009 20:17:01.709868  478299 kubeadm.go:318] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1009 20:17:01.731122  478299 kubeadm.go:318] [upload-certs] Skipping phase. Please see --upload-certs
	I1009 20:17:01.731347  478299 kubeadm.go:318] [mark-control-plane] Marking the node no-preload-020313 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1009 20:17:01.745629  478299 kubeadm.go:318] [bootstrap-token] Using token: k1adms.d3m2bl7cchsl0gh7
	I1009 20:16:59.319457  481385 addons.go:514] duration metric: took 12.546579999s for enable addons: enabled=[storage-provisioner default-storageclass dashboard]
	I1009 20:16:59.329942  481385 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1009 20:16:59.331355  481385 api_server.go:141] control plane version: v1.28.0
	I1009 20:16:59.331381  481385 api_server.go:131] duration metric: took 20.462896ms to wait for apiserver health ...
	I1009 20:16:59.331390  481385 system_pods.go:43] waiting for kube-system pods to appear ...
	I1009 20:16:59.343570  481385 system_pods.go:59] 8 kube-system pods found
	I1009 20:16:59.343613  481385 system_pods.go:61] "coredns-5dd5756b68-kz799" [a5653f04-c5f7-41b0-842e-6bf0d39c87e4] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1009 20:16:59.343623  481385 system_pods.go:61] "etcd-old-k8s-version-670649" [d11100e5-175e-4ba6-a3ff-319b8e06f201] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1009 20:16:59.343629  481385 system_pods.go:61] "kindnet-4nzl2" [38f23811-b6c3-404d-a1bb-450efc1a88a8] Running
	I1009 20:16:59.343637  481385 system_pods.go:61] "kube-apiserver-old-k8s-version-670649" [db8432c9-40a2-4e41-9128-1268899fd332] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1009 20:16:59.343644  481385 system_pods.go:61] "kube-controller-manager-old-k8s-version-670649" [e337503f-a675-4bf8-862e-9725f5366328] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1009 20:16:59.343649  481385 system_pods.go:61] "kube-proxy-fffc5" [ed72fb72-aba8-4b62-af33-fa5fe774504d] Running
	I1009 20:16:59.343658  481385 system_pods.go:61] "kube-scheduler-old-k8s-version-670649" [3ffbdb2a-14fa-4385-82b7-365ead9bfca1] Running
	I1009 20:16:59.343663  481385 system_pods.go:61] "storage-provisioner" [7148e7df-c3a2-4e32-ab15-be142bc605da] Running
	I1009 20:16:59.343671  481385 system_pods.go:74] duration metric: took 12.275461ms to wait for pod list to return data ...
	I1009 20:16:59.343678  481385 default_sa.go:34] waiting for default service account to be created ...
	I1009 20:16:59.348027  481385 default_sa.go:45] found service account: "default"
	I1009 20:16:59.348052  481385 default_sa.go:55] duration metric: took 4.363075ms for default service account to be created ...
	I1009 20:16:59.348062  481385 system_pods.go:116] waiting for k8s-apps to be running ...
	I1009 20:16:59.354008  481385 system_pods.go:86] 8 kube-system pods found
	I1009 20:16:59.354039  481385 system_pods.go:89] "coredns-5dd5756b68-kz799" [a5653f04-c5f7-41b0-842e-6bf0d39c87e4] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1009 20:16:59.354061  481385 system_pods.go:89] "etcd-old-k8s-version-670649" [d11100e5-175e-4ba6-a3ff-319b8e06f201] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1009 20:16:59.354068  481385 system_pods.go:89] "kindnet-4nzl2" [38f23811-b6c3-404d-a1bb-450efc1a88a8] Running
	I1009 20:16:59.354076  481385 system_pods.go:89] "kube-apiserver-old-k8s-version-670649" [db8432c9-40a2-4e41-9128-1268899fd332] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1009 20:16:59.354090  481385 system_pods.go:89] "kube-controller-manager-old-k8s-version-670649" [e337503f-a675-4bf8-862e-9725f5366328] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1009 20:16:59.354096  481385 system_pods.go:89] "kube-proxy-fffc5" [ed72fb72-aba8-4b62-af33-fa5fe774504d] Running
	I1009 20:16:59.354104  481385 system_pods.go:89] "kube-scheduler-old-k8s-version-670649" [3ffbdb2a-14fa-4385-82b7-365ead9bfca1] Running
	I1009 20:16:59.354110  481385 system_pods.go:89] "storage-provisioner" [7148e7df-c3a2-4e32-ab15-be142bc605da] Running
	I1009 20:16:59.354121  481385 system_pods.go:126] duration metric: took 6.05416ms to wait for k8s-apps to be running ...
	I1009 20:16:59.354129  481385 system_svc.go:44] waiting for kubelet service to be running ....
	I1009 20:16:59.354192  481385 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1009 20:16:59.378380  481385 system_svc.go:56] duration metric: took 24.238113ms WaitForService to wait for kubelet
	I1009 20:16:59.378461  481385 kubeadm.go:586] duration metric: took 12.605946084s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1009 20:16:59.378512  481385 node_conditions.go:102] verifying NodePressure condition ...
	I1009 20:16:59.385263  481385 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1009 20:16:59.385348  481385 node_conditions.go:123] node cpu capacity is 2
	I1009 20:16:59.385376  481385 node_conditions.go:105] duration metric: took 6.843939ms to run NodePressure ...
	I1009 20:16:59.385423  481385 start.go:242] waiting for startup goroutines ...
	I1009 20:16:59.385452  481385 start.go:247] waiting for cluster config update ...
	I1009 20:16:59.385483  481385 start.go:256] writing updated cluster config ...
	I1009 20:16:59.385858  481385 ssh_runner.go:195] Run: rm -f paused
	I1009 20:16:59.390632  481385 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1009 20:16:59.396745  481385 pod_ready.go:83] waiting for pod "coredns-5dd5756b68-kz799" in "kube-system" namespace to be "Ready" or be gone ...
	W1009 20:17:01.404097  481385 pod_ready.go:104] pod "coredns-5dd5756b68-kz799" is not "Ready", error: <nil>
	I1009 20:17:01.748661  478299 out.go:252]   - Configuring RBAC rules ...
	I1009 20:17:01.748802  478299 kubeadm.go:318] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1009 20:17:01.756148  478299 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1009 20:17:01.769534  478299 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1009 20:17:01.777074  478299 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1009 20:17:01.782525  478299 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1009 20:17:01.787700  478299 kubeadm.go:318] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1009 20:17:02.059884  478299 kubeadm.go:318] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1009 20:17:02.527348  478299 kubeadm.go:318] [addons] Applied essential addon: CoreDNS
	I1009 20:17:03.059034  478299 kubeadm.go:318] [addons] Applied essential addon: kube-proxy
	I1009 20:17:03.060310  478299 kubeadm.go:318] 
	I1009 20:17:03.060399  478299 kubeadm.go:318] Your Kubernetes control-plane has initialized successfully!
	I1009 20:17:03.060411  478299 kubeadm.go:318] 
	I1009 20:17:03.060500  478299 kubeadm.go:318] To start using your cluster, you need to run the following as a regular user:
	I1009 20:17:03.060573  478299 kubeadm.go:318] 
	I1009 20:17:03.060606  478299 kubeadm.go:318]   mkdir -p $HOME/.kube
	I1009 20:17:03.060692  478299 kubeadm.go:318]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1009 20:17:03.060749  478299 kubeadm.go:318]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1009 20:17:03.060754  478299 kubeadm.go:318] 
	I1009 20:17:03.060816  478299 kubeadm.go:318] Alternatively, if you are the root user, you can run:
	I1009 20:17:03.060820  478299 kubeadm.go:318] 
	I1009 20:17:03.060870  478299 kubeadm.go:318]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1009 20:17:03.060875  478299 kubeadm.go:318] 
	I1009 20:17:03.060929  478299 kubeadm.go:318] You should now deploy a pod network to the cluster.
	I1009 20:17:03.061008  478299 kubeadm.go:318] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1009 20:17:03.061080  478299 kubeadm.go:318]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1009 20:17:03.061084  478299 kubeadm.go:318] 
	I1009 20:17:03.061211  478299 kubeadm.go:318] You can now join any number of control-plane nodes by copying certificate authorities
	I1009 20:17:03.061294  478299 kubeadm.go:318] and service account keys on each node and then running the following as root:
	I1009 20:17:03.061299  478299 kubeadm.go:318] 
	I1009 20:17:03.061386  478299 kubeadm.go:318]   kubeadm join control-plane.minikube.internal:8443 --token k1adms.d3m2bl7cchsl0gh7 \
	I1009 20:17:03.061494  478299 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:e766d16640f098061f552dd476e80ebd3809bd57b4957045222f32c55d34903e \
	I1009 20:17:03.061516  478299 kubeadm.go:318] 	--control-plane 
	I1009 20:17:03.061521  478299 kubeadm.go:318] 
	I1009 20:17:03.061623  478299 kubeadm.go:318] Then you can join any number of worker nodes by running the following on each as root:
	I1009 20:17:03.061628  478299 kubeadm.go:318] 
	I1009 20:17:03.061714  478299 kubeadm.go:318] kubeadm join control-plane.minikube.internal:8443 --token k1adms.d3m2bl7cchsl0gh7 \
	I1009 20:17:03.061820  478299 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:e766d16640f098061f552dd476e80ebd3809bd57b4957045222f32c55d34903e 
	I1009 20:17:03.065824  478299 kubeadm.go:318] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1009 20:17:03.066068  478299 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1009 20:17:03.066191  478299 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1009 20:17:03.066291  478299 cni.go:84] Creating CNI manager for ""
	I1009 20:17:03.066303  478299 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1009 20:17:03.070355  478299 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1009 20:17:03.073441  478299 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1009 20:17:03.079412  478299 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1009 20:17:03.079440  478299 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1009 20:17:03.098284  478299 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1009 20:17:03.473833  478299 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1009 20:17:03.473977  478299 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1009 20:17:03.474043  478299 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-020313 minikube.k8s.io/updated_at=2025_10_09T20_17_03_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=67931d2cc0a2e29153be17ee4f2d502b8a45c9cb minikube.k8s.io/name=no-preload-020313 minikube.k8s.io/primary=true
	I1009 20:17:03.676606  478299 ops.go:34] apiserver oom_adj: -16
	I1009 20:17:03.676828  478299 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1009 20:17:04.177799  478299 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1009 20:17:04.676862  478299 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1009 20:17:05.177643  478299 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	W1009 20:17:03.904116  481385 pod_ready.go:104] pod "coredns-5dd5756b68-kz799" is not "Ready", error: <nil>
	W1009 20:17:06.403414  481385 pod_ready.go:104] pod "coredns-5dd5756b68-kz799" is not "Ready", error: <nil>
	I1009 20:17:05.677826  478299 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1009 20:17:06.177241  478299 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1009 20:17:06.677045  478299 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1009 20:17:07.177652  478299 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1009 20:17:07.676934  478299 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1009 20:17:08.176992  478299 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1009 20:17:08.309026  478299 kubeadm.go:1113] duration metric: took 4.835106229s to wait for elevateKubeSystemPrivileges
	I1009 20:17:08.309057  478299 kubeadm.go:402] duration metric: took 29.398934252s to StartCluster
	I1009 20:17:08.309080  478299 settings.go:142] acquiring lock: {Name:mk20228ebaa2294ae35726600a0d8058088b24a4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 20:17:08.309173  478299 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21683-294150/kubeconfig
	I1009 20:17:08.310166  478299 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-294150/kubeconfig: {Name:mke4661344aa77e10bf6690825e8d3a29b29b411 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 20:17:08.310405  478299 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1009 20:17:08.310504  478299 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1009 20:17:08.310770  478299 config.go:182] Loaded profile config "no-preload-020313": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1009 20:17:08.310811  478299 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1009 20:17:08.310885  478299 addons.go:69] Setting storage-provisioner=true in profile "no-preload-020313"
	I1009 20:17:08.310900  478299 addons.go:238] Setting addon storage-provisioner=true in "no-preload-020313"
	I1009 20:17:08.310924  478299 host.go:66] Checking if "no-preload-020313" exists ...
	I1009 20:17:08.311425  478299 cli_runner.go:164] Run: docker container inspect no-preload-020313 --format={{.State.Status}}
	I1009 20:17:08.311945  478299 addons.go:69] Setting default-storageclass=true in profile "no-preload-020313"
	I1009 20:17:08.311968  478299 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-020313"
	I1009 20:17:08.312265  478299 cli_runner.go:164] Run: docker container inspect no-preload-020313 --format={{.State.Status}}
	I1009 20:17:08.313967  478299 out.go:179] * Verifying Kubernetes components...
	I1009 20:17:08.320057  478299 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 20:17:08.356451  478299 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1009 20:17:08.360323  478299 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1009 20:17:08.360346  478299 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1009 20:17:08.360405  478299 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-020313
	I1009 20:17:08.361185  478299 addons.go:238] Setting addon default-storageclass=true in "no-preload-020313"
	I1009 20:17:08.361220  478299 host.go:66] Checking if "no-preload-020313" exists ...
	I1009 20:17:08.361640  478299 cli_runner.go:164] Run: docker container inspect no-preload-020313 --format={{.State.Status}}
	I1009 20:17:08.417336  478299 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33421 SSHKeyPath:/home/jenkins/minikube-integration/21683-294150/.minikube/machines/no-preload-020313/id_rsa Username:docker}
	I1009 20:17:08.423903  478299 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1009 20:17:08.423924  478299 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1009 20:17:08.423992  478299 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-020313
	I1009 20:17:08.459419  478299 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33421 SSHKeyPath:/home/jenkins/minikube-integration/21683-294150/.minikube/machines/no-preload-020313/id_rsa Username:docker}
	I1009 20:17:08.708895  478299 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1009 20:17:08.820136  478299 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1009 20:17:08.867454  478299 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1009 20:17:08.867573  478299 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1009 20:17:09.658300  478299 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1009 20:17:09.661388  478299 addons.go:514] duration metric: took 1.350556045s for enable addons: enabled=[storage-provisioner default-storageclass]
	I1009 20:17:09.730665  478299 start.go:977] {"host.minikube.internal": 192.168.85.1} host record injected into CoreDNS's ConfigMap
	I1009 20:17:09.732536  478299 node_ready.go:35] waiting up to 6m0s for node "no-preload-020313" to be "Ready" ...
	I1009 20:17:10.238841  478299 kapi.go:214] "coredns" deployment in "kube-system" namespace and "no-preload-020313" context rescaled to 1 replicas
	W1009 20:17:08.416727  481385 pod_ready.go:104] pod "coredns-5dd5756b68-kz799" is not "Ready", error: <nil>
	W1009 20:17:10.904024  481385 pod_ready.go:104] pod "coredns-5dd5756b68-kz799" is not "Ready", error: <nil>
	W1009 20:17:11.735786  478299 node_ready.go:57] node "no-preload-020313" has "Ready":"False" status (will retry)
	W1009 20:17:13.736575  478299 node_ready.go:57] node "no-preload-020313" has "Ready":"False" status (will retry)
	W1009 20:17:12.907880  481385 pod_ready.go:104] pod "coredns-5dd5756b68-kz799" is not "Ready", error: <nil>
	W1009 20:17:15.403024  481385 pod_ready.go:104] pod "coredns-5dd5756b68-kz799" is not "Ready", error: <nil>
	W1009 20:17:17.403407  481385 pod_ready.go:104] pod "coredns-5dd5756b68-kz799" is not "Ready", error: <nil>
	W1009 20:17:16.235249  478299 node_ready.go:57] node "no-preload-020313" has "Ready":"False" status (will retry)
	W1009 20:17:18.235603  478299 node_ready.go:57] node "no-preload-020313" has "Ready":"False" status (will retry)
	W1009 20:17:20.236147  478299 node_ready.go:57] node "no-preload-020313" has "Ready":"False" status (will retry)
	W1009 20:17:19.404386  481385 pod_ready.go:104] pod "coredns-5dd5756b68-kz799" is not "Ready", error: <nil>
	W1009 20:17:21.904961  481385 pod_ready.go:104] pod "coredns-5dd5756b68-kz799" is not "Ready", error: <nil>
	W1009 20:17:22.736784  478299 node_ready.go:57] node "no-preload-020313" has "Ready":"False" status (will retry)
	I1009 20:17:23.241009  478299 node_ready.go:49] node "no-preload-020313" is "Ready"
	I1009 20:17:23.241034  478299 node_ready.go:38] duration metric: took 13.508466832s for node "no-preload-020313" to be "Ready" ...
	I1009 20:17:23.241049  478299 api_server.go:52] waiting for apiserver process to appear ...
	I1009 20:17:23.241141  478299 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:17:23.261535  478299 api_server.go:72] duration metric: took 14.951090851s to wait for apiserver process to appear ...
	I1009 20:17:23.261608  478299 api_server.go:88] waiting for apiserver healthz status ...
	I1009 20:17:23.261644  478299 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1009 20:17:23.271493  478299 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1009 20:17:23.272750  478299 api_server.go:141] control plane version: v1.34.1
	I1009 20:17:23.272809  478299 api_server.go:131] duration metric: took 11.179435ms to wait for apiserver health ...
	I1009 20:17:23.272834  478299 system_pods.go:43] waiting for kube-system pods to appear ...
	I1009 20:17:23.276627  478299 system_pods.go:59] 8 kube-system pods found
	I1009 20:17:23.276706  478299 system_pods.go:61] "coredns-66bc5c9577-h7jz6" [50ef033a-7db2-4326-a6d6-574c692f50ba] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1009 20:17:23.276730  478299 system_pods.go:61] "etcd-no-preload-020313" [ffe41bc4-bdd7-4da8-9781-364de0d17db9] Running
	I1009 20:17:23.276768  478299 system_pods.go:61] "kindnet-47kwl" [60a32ed3-a01b-47ee-9128-d0763b3502ee] Running
	I1009 20:17:23.276794  478299 system_pods.go:61] "kube-apiserver-no-preload-020313" [d8f0991e-2fdd-4635-b144-99bfccfc61c0] Running
	I1009 20:17:23.276820  478299 system_pods.go:61] "kube-controller-manager-no-preload-020313" [a14b0780-83e0-4076-9076-c673c69ee034] Running
	I1009 20:17:23.276845  478299 system_pods.go:61] "kube-proxy-cd5v6" [7843ebcc-c450-40f9-b0dd-6cb09dd70a81] Running
	I1009 20:17:23.276878  478299 system_pods.go:61] "kube-scheduler-no-preload-020313" [a3f3beaf-2476-4cc8-845c-e0230d0fb499] Running
	I1009 20:17:23.276904  478299 system_pods.go:61] "storage-provisioner" [03ca5595-692b-4e09-a599-439b385749c1] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1009 20:17:23.276925  478299 system_pods.go:74] duration metric: took 4.071575ms to wait for pod list to return data ...
	I1009 20:17:23.276949  478299 default_sa.go:34] waiting for default service account to be created ...
	I1009 20:17:23.280908  478299 default_sa.go:45] found service account: "default"
	I1009 20:17:23.280968  478299 default_sa.go:55] duration metric: took 3.987602ms for default service account to be created ...
	I1009 20:17:23.280998  478299 system_pods.go:116] waiting for k8s-apps to be running ...
	I1009 20:17:23.283984  478299 system_pods.go:86] 8 kube-system pods found
	I1009 20:17:23.284056  478299 system_pods.go:89] "coredns-66bc5c9577-h7jz6" [50ef033a-7db2-4326-a6d6-574c692f50ba] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1009 20:17:23.284081  478299 system_pods.go:89] "etcd-no-preload-020313" [ffe41bc4-bdd7-4da8-9781-364de0d17db9] Running
	I1009 20:17:23.284127  478299 system_pods.go:89] "kindnet-47kwl" [60a32ed3-a01b-47ee-9128-d0763b3502ee] Running
	I1009 20:17:23.284152  478299 system_pods.go:89] "kube-apiserver-no-preload-020313" [d8f0991e-2fdd-4635-b144-99bfccfc61c0] Running
	I1009 20:17:23.284174  478299 system_pods.go:89] "kube-controller-manager-no-preload-020313" [a14b0780-83e0-4076-9076-c673c69ee034] Running
	I1009 20:17:23.284199  478299 system_pods.go:89] "kube-proxy-cd5v6" [7843ebcc-c450-40f9-b0dd-6cb09dd70a81] Running
	I1009 20:17:23.284232  478299 system_pods.go:89] "kube-scheduler-no-preload-020313" [a3f3beaf-2476-4cc8-845c-e0230d0fb499] Running
	I1009 20:17:23.284262  478299 system_pods.go:89] "storage-provisioner" [03ca5595-692b-4e09-a599-439b385749c1] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1009 20:17:23.284302  478299 retry.go:31] will retry after 239.476255ms: missing components: kube-dns
	I1009 20:17:23.528695  478299 system_pods.go:86] 8 kube-system pods found
	I1009 20:17:23.528740  478299 system_pods.go:89] "coredns-66bc5c9577-h7jz6" [50ef033a-7db2-4326-a6d6-574c692f50ba] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1009 20:17:23.528748  478299 system_pods.go:89] "etcd-no-preload-020313" [ffe41bc4-bdd7-4da8-9781-364de0d17db9] Running
	I1009 20:17:23.528754  478299 system_pods.go:89] "kindnet-47kwl" [60a32ed3-a01b-47ee-9128-d0763b3502ee] Running
	I1009 20:17:23.528759  478299 system_pods.go:89] "kube-apiserver-no-preload-020313" [d8f0991e-2fdd-4635-b144-99bfccfc61c0] Running
	I1009 20:17:23.528764  478299 system_pods.go:89] "kube-controller-manager-no-preload-020313" [a14b0780-83e0-4076-9076-c673c69ee034] Running
	I1009 20:17:23.528768  478299 system_pods.go:89] "kube-proxy-cd5v6" [7843ebcc-c450-40f9-b0dd-6cb09dd70a81] Running
	I1009 20:17:23.528772  478299 system_pods.go:89] "kube-scheduler-no-preload-020313" [a3f3beaf-2476-4cc8-845c-e0230d0fb499] Running
	I1009 20:17:23.528778  478299 system_pods.go:89] "storage-provisioner" [03ca5595-692b-4e09-a599-439b385749c1] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1009 20:17:23.528796  478299 retry.go:31] will retry after 345.618791ms: missing components: kube-dns
	I1009 20:17:23.878875  478299 system_pods.go:86] 8 kube-system pods found
	I1009 20:17:23.878910  478299 system_pods.go:89] "coredns-66bc5c9577-h7jz6" [50ef033a-7db2-4326-a6d6-574c692f50ba] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1009 20:17:23.878919  478299 system_pods.go:89] "etcd-no-preload-020313" [ffe41bc4-bdd7-4da8-9781-364de0d17db9] Running
	I1009 20:17:23.878925  478299 system_pods.go:89] "kindnet-47kwl" [60a32ed3-a01b-47ee-9128-d0763b3502ee] Running
	I1009 20:17:23.878929  478299 system_pods.go:89] "kube-apiserver-no-preload-020313" [d8f0991e-2fdd-4635-b144-99bfccfc61c0] Running
	I1009 20:17:23.878934  478299 system_pods.go:89] "kube-controller-manager-no-preload-020313" [a14b0780-83e0-4076-9076-c673c69ee034] Running
	I1009 20:17:23.878938  478299 system_pods.go:89] "kube-proxy-cd5v6" [7843ebcc-c450-40f9-b0dd-6cb09dd70a81] Running
	I1009 20:17:23.878942  478299 system_pods.go:89] "kube-scheduler-no-preload-020313" [a3f3beaf-2476-4cc8-845c-e0230d0fb499] Running
	I1009 20:17:23.878946  478299 system_pods.go:89] "storage-provisioner" [03ca5595-692b-4e09-a599-439b385749c1] Running
	I1009 20:17:23.878954  478299 system_pods.go:126] duration metric: took 597.936904ms to wait for k8s-apps to be running ...
	I1009 20:17:23.878962  478299 system_svc.go:44] waiting for kubelet service to be running ....
	I1009 20:17:23.879016  478299 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1009 20:17:23.892851  478299 system_svc.go:56] duration metric: took 13.878551ms WaitForService to wait for kubelet
	I1009 20:17:23.892927  478299 kubeadm.go:586] duration metric: took 15.582488453s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1009 20:17:23.892960  478299 node_conditions.go:102] verifying NodePressure condition ...
	I1009 20:17:23.897221  478299 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1009 20:17:23.897309  478299 node_conditions.go:123] node cpu capacity is 2
	I1009 20:17:23.897337  478299 node_conditions.go:105] duration metric: took 4.357191ms to run NodePressure ...
	I1009 20:17:23.897379  478299 start.go:242] waiting for startup goroutines ...
	I1009 20:17:23.897405  478299 start.go:247] waiting for cluster config update ...
	I1009 20:17:23.897432  478299 start.go:256] writing updated cluster config ...
	I1009 20:17:23.897798  478299 ssh_runner.go:195] Run: rm -f paused
	I1009 20:17:23.903362  478299 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1009 20:17:23.907269  478299 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-h7jz6" in "kube-system" namespace to be "Ready" or be gone ...
	I1009 20:17:24.912625  478299 pod_ready.go:94] pod "coredns-66bc5c9577-h7jz6" is "Ready"
	I1009 20:17:24.912709  478299 pod_ready.go:86] duration metric: took 1.005408965s for pod "coredns-66bc5c9577-h7jz6" in "kube-system" namespace to be "Ready" or be gone ...
	I1009 20:17:24.915690  478299 pod_ready.go:83] waiting for pod "etcd-no-preload-020313" in "kube-system" namespace to be "Ready" or be gone ...
	I1009 20:17:24.920539  478299 pod_ready.go:94] pod "etcd-no-preload-020313" is "Ready"
	I1009 20:17:24.920565  478299 pod_ready.go:86] duration metric: took 4.852697ms for pod "etcd-no-preload-020313" in "kube-system" namespace to be "Ready" or be gone ...
	I1009 20:17:24.923847  478299 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-020313" in "kube-system" namespace to be "Ready" or be gone ...
	I1009 20:17:24.928807  478299 pod_ready.go:94] pod "kube-apiserver-no-preload-020313" is "Ready"
	I1009 20:17:24.928833  478299 pod_ready.go:86] duration metric: took 4.962672ms for pod "kube-apiserver-no-preload-020313" in "kube-system" namespace to be "Ready" or be gone ...
	I1009 20:17:24.932066  478299 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-020313" in "kube-system" namespace to be "Ready" or be gone ...
	I1009 20:17:25.110674  478299 pod_ready.go:94] pod "kube-controller-manager-no-preload-020313" is "Ready"
	I1009 20:17:25.110747  478299 pod_ready.go:86] duration metric: took 178.653833ms for pod "kube-controller-manager-no-preload-020313" in "kube-system" namespace to be "Ready" or be gone ...
	I1009 20:17:25.311718  478299 pod_ready.go:83] waiting for pod "kube-proxy-cd5v6" in "kube-system" namespace to be "Ready" or be gone ...
	I1009 20:17:25.710644  478299 pod_ready.go:94] pod "kube-proxy-cd5v6" is "Ready"
	I1009 20:17:25.710672  478299 pod_ready.go:86] duration metric: took 398.928817ms for pod "kube-proxy-cd5v6" in "kube-system" namespace to be "Ready" or be gone ...
	I1009 20:17:25.911042  478299 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-020313" in "kube-system" namespace to be "Ready" or be gone ...
	I1009 20:17:26.311166  478299 pod_ready.go:94] pod "kube-scheduler-no-preload-020313" is "Ready"
	I1009 20:17:26.311198  478299 pod_ready.go:86] duration metric: took 400.129217ms for pod "kube-scheduler-no-preload-020313" in "kube-system" namespace to be "Ready" or be gone ...
	I1009 20:17:26.311211  478299 pod_ready.go:40] duration metric: took 2.407816377s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1009 20:17:26.376365  478299 start.go:628] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1009 20:17:26.379546  478299 out.go:179] * Done! kubectl is now configured to use "no-preload-020313" cluster and "default" namespace by default
	W1009 20:17:24.402782  481385 pod_ready.go:104] pod "coredns-5dd5756b68-kz799" is not "Ready", error: <nil>
	W1009 20:17:26.410266  481385 pod_ready.go:104] pod "coredns-5dd5756b68-kz799" is not "Ready", error: <nil>
	W1009 20:17:28.902508  481385 pod_ready.go:104] pod "coredns-5dd5756b68-kz799" is not "Ready", error: <nil>
	W1009 20:17:30.903956  481385 pod_ready.go:104] pod "coredns-5dd5756b68-kz799" is not "Ready", error: <nil>
	W1009 20:17:33.402291  481385 pod_ready.go:104] pod "coredns-5dd5756b68-kz799" is not "Ready", error: <nil>
	W1009 20:17:35.403475  481385 pod_ready.go:104] pod "coredns-5dd5756b68-kz799" is not "Ready", error: <nil>
	I1009 20:17:35.903149  481385 pod_ready.go:94] pod "coredns-5dd5756b68-kz799" is "Ready"
	I1009 20:17:35.903179  481385 pod_ready.go:86] duration metric: took 36.506360372s for pod "coredns-5dd5756b68-kz799" in "kube-system" namespace to be "Ready" or be gone ...
	I1009 20:17:35.906785  481385 pod_ready.go:83] waiting for pod "etcd-old-k8s-version-670649" in "kube-system" namespace to be "Ready" or be gone ...
	I1009 20:17:35.912125  481385 pod_ready.go:94] pod "etcd-old-k8s-version-670649" is "Ready"
	I1009 20:17:35.912155  481385 pod_ready.go:86] duration metric: took 5.342731ms for pod "etcd-old-k8s-version-670649" in "kube-system" namespace to be "Ready" or be gone ...
	I1009 20:17:35.915550  481385 pod_ready.go:83] waiting for pod "kube-apiserver-old-k8s-version-670649" in "kube-system" namespace to be "Ready" or be gone ...
	I1009 20:17:35.921618  481385 pod_ready.go:94] pod "kube-apiserver-old-k8s-version-670649" is "Ready"
	I1009 20:17:35.921649  481385 pod_ready.go:86] duration metric: took 6.070027ms for pod "kube-apiserver-old-k8s-version-670649" in "kube-system" namespace to be "Ready" or be gone ...
	I1009 20:17:35.925198  481385 pod_ready.go:83] waiting for pod "kube-controller-manager-old-k8s-version-670649" in "kube-system" namespace to be "Ready" or be gone ...
	I1009 20:17:36.100211  481385 pod_ready.go:94] pod "kube-controller-manager-old-k8s-version-670649" is "Ready"
	I1009 20:17:36.100239  481385 pod_ready.go:86] duration metric: took 175.01292ms for pod "kube-controller-manager-old-k8s-version-670649" in "kube-system" namespace to be "Ready" or be gone ...
	I1009 20:17:36.301292  481385 pod_ready.go:83] waiting for pod "kube-proxy-fffc5" in "kube-system" namespace to be "Ready" or be gone ...
	I1009 20:17:36.703693  481385 pod_ready.go:94] pod "kube-proxy-fffc5" is "Ready"
	I1009 20:17:36.703724  481385 pod_ready.go:86] duration metric: took 402.403545ms for pod "kube-proxy-fffc5" in "kube-system" namespace to be "Ready" or be gone ...
	I1009 20:17:36.901823  481385 pod_ready.go:83] waiting for pod "kube-scheduler-old-k8s-version-670649" in "kube-system" namespace to be "Ready" or be gone ...
	I1009 20:17:37.303909  481385 pod_ready.go:94] pod "kube-scheduler-old-k8s-version-670649" is "Ready"
	I1009 20:17:37.303941  481385 pod_ready.go:86] duration metric: took 402.093213ms for pod "kube-scheduler-old-k8s-version-670649" in "kube-system" namespace to be "Ready" or be gone ...
	I1009 20:17:37.303956  481385 pod_ready.go:40] duration metric: took 37.91324293s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1009 20:17:37.385064  481385 start.go:628] kubectl: 1.33.2, cluster: 1.28.0 (minor skew: 5)
	I1009 20:17:37.388325  481385 out.go:203] 
	W1009 20:17:37.391484  481385 out.go:285] ! /usr/local/bin/kubectl is version 1.33.2, which may have incompatibilities with Kubernetes 1.28.0.
	I1009 20:17:37.394503  481385 out.go:179]   - Want kubectl v1.28.0? Try 'minikube kubectl -- get pods -A'
	I1009 20:17:37.397717  481385 out.go:179] * Done! kubectl is now configured to use "old-k8s-version-670649" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Oct 09 20:17:23 no-preload-020313 crio[834]: time="2025-10-09T20:17:23.600463585Z" level=info msg="Created container 869d1e5acf9e9669e28f4680ae0ce8cc257836ed01b3c538ea7932a6f3d2bdd5: kube-system/coredns-66bc5c9577-h7jz6/coredns" id=b7c6daae-70f4-4d9f-b2be-033f8ae31de9 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 20:17:23 no-preload-020313 crio[834]: time="2025-10-09T20:17:23.601369354Z" level=info msg="Starting container: 869d1e5acf9e9669e28f4680ae0ce8cc257836ed01b3c538ea7932a6f3d2bdd5" id=7a26308e-83a6-45a1-aef1-de9d06886a0f name=/runtime.v1.RuntimeService/StartContainer
	Oct 09 20:17:23 no-preload-020313 crio[834]: time="2025-10-09T20:17:23.605389563Z" level=info msg="Started container" PID=2486 containerID=869d1e5acf9e9669e28f4680ae0ce8cc257836ed01b3c538ea7932a6f3d2bdd5 description=kube-system/coredns-66bc5c9577-h7jz6/coredns id=7a26308e-83a6-45a1-aef1-de9d06886a0f name=/runtime.v1.RuntimeService/StartContainer sandboxID=a67ecb0c4fd6d94e225abe48b46000c16dec488ccaa9ecee1d476d9c4eedcac8
	Oct 09 20:17:26 no-preload-020313 crio[834]: time="2025-10-09T20:17:26.899653957Z" level=info msg="Running pod sandbox: default/busybox/POD" id=78203e58-8542-4ed7-97fb-bf8a704591c9 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 09 20:17:26 no-preload-020313 crio[834]: time="2025-10-09T20:17:26.899721888Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 09 20:17:26 no-preload-020313 crio[834]: time="2025-10-09T20:17:26.904808993Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:e84a7398e2191001f2a171c674520d431b7ffed36200924b12ac4774d3819181 UID:bee2655f-729a-4600-b1e4-939eef3e8e2b NetNS:/var/run/netns/68eddb9c-b964-4cd3-a7cd-eb0e46aa914c Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x40027723c8}] Aliases:map[]}"
	Oct 09 20:17:26 no-preload-020313 crio[834]: time="2025-10-09T20:17:26.904859176Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Oct 09 20:17:26 no-preload-020313 crio[834]: time="2025-10-09T20:17:26.919653153Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:e84a7398e2191001f2a171c674520d431b7ffed36200924b12ac4774d3819181 UID:bee2655f-729a-4600-b1e4-939eef3e8e2b NetNS:/var/run/netns/68eddb9c-b964-4cd3-a7cd-eb0e46aa914c Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x40027723c8}] Aliases:map[]}"
	Oct 09 20:17:26 no-preload-020313 crio[834]: time="2025-10-09T20:17:26.91981126Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Oct 09 20:17:26 no-preload-020313 crio[834]: time="2025-10-09T20:17:26.922683523Z" level=info msg="Ran pod sandbox e84a7398e2191001f2a171c674520d431b7ffed36200924b12ac4774d3819181 with infra container: default/busybox/POD" id=78203e58-8542-4ed7-97fb-bf8a704591c9 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 09 20:17:26 no-preload-020313 crio[834]: time="2025-10-09T20:17:26.925737934Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=9988b7c2-4089-48fc-ba82-e3f261985e86 name=/runtime.v1.ImageService/ImageStatus
	Oct 09 20:17:26 no-preload-020313 crio[834]: time="2025-10-09T20:17:26.925905304Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=9988b7c2-4089-48fc-ba82-e3f261985e86 name=/runtime.v1.ImageService/ImageStatus
	Oct 09 20:17:26 no-preload-020313 crio[834]: time="2025-10-09T20:17:26.925971955Z" level=info msg="Neither image nor artfiact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=9988b7c2-4089-48fc-ba82-e3f261985e86 name=/runtime.v1.ImageService/ImageStatus
	Oct 09 20:17:26 no-preload-020313 crio[834]: time="2025-10-09T20:17:26.926649963Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=0ec056f1-1efc-401a-9f60-7b1e52565c19 name=/runtime.v1.ImageService/PullImage
	Oct 09 20:17:26 no-preload-020313 crio[834]: time="2025-10-09T20:17:26.928114731Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Oct 09 20:17:28 no-preload-020313 crio[834]: time="2025-10-09T20:17:28.837969085Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e" id=0ec056f1-1efc-401a-9f60-7b1e52565c19 name=/runtime.v1.ImageService/PullImage
	Oct 09 20:17:28 no-preload-020313 crio[834]: time="2025-10-09T20:17:28.842157297Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=6ab4dab4-9a0c-4ad5-8b59-e38ebad97800 name=/runtime.v1.ImageService/ImageStatus
	Oct 09 20:17:28 no-preload-020313 crio[834]: time="2025-10-09T20:17:28.845729664Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=535f7980-6374-4cc7-bc2e-090b0a6b2556 name=/runtime.v1.ImageService/ImageStatus
	Oct 09 20:17:28 no-preload-020313 crio[834]: time="2025-10-09T20:17:28.854672053Z" level=info msg="Creating container: default/busybox/busybox" id=a3649a7a-0292-46b3-b3ad-e057459412d5 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 20:17:28 no-preload-020313 crio[834]: time="2025-10-09T20:17:28.855811113Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 09 20:17:28 no-preload-020313 crio[834]: time="2025-10-09T20:17:28.860939146Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 09 20:17:28 no-preload-020313 crio[834]: time="2025-10-09T20:17:28.861738271Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 09 20:17:28 no-preload-020313 crio[834]: time="2025-10-09T20:17:28.881793918Z" level=info msg="Created container 572aa416bf0bca7a8ad9c25aa2a59af1ac72468c91119fe04787fb48efe86c4e: default/busybox/busybox" id=a3649a7a-0292-46b3-b3ad-e057459412d5 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 20:17:28 no-preload-020313 crio[834]: time="2025-10-09T20:17:28.88355876Z" level=info msg="Starting container: 572aa416bf0bca7a8ad9c25aa2a59af1ac72468c91119fe04787fb48efe86c4e" id=ff35a0f1-215f-4ffd-a479-b518e63f7f41 name=/runtime.v1.RuntimeService/StartContainer
	Oct 09 20:17:28 no-preload-020313 crio[834]: time="2025-10-09T20:17:28.885406795Z" level=info msg="Started container" PID=2543 containerID=572aa416bf0bca7a8ad9c25aa2a59af1ac72468c91119fe04787fb48efe86c4e description=default/busybox/busybox id=ff35a0f1-215f-4ffd-a479-b518e63f7f41 name=/runtime.v1.RuntimeService/StartContainer sandboxID=e84a7398e2191001f2a171c674520d431b7ffed36200924b12ac4774d3819181
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	572aa416bf0bc       gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e   9 seconds ago       Running             busybox                   0                   e84a7398e2191       busybox                                     default
	869d1e5acf9e9       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                      14 seconds ago      Running             coredns                   0                   a67ecb0c4fd6d       coredns-66bc5c9577-h7jz6                    kube-system
	3ce5637991450       66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51                                      14 seconds ago      Running             storage-provisioner       0                   b5ad714b99ea0       storage-provisioner                         kube-system
	ebc8c423df0f0       docker.io/kindest/kindnetd@sha256:2bdc3188f2ddc8e54841f69ef900a8dde1280057c97500f966a7ef31364021f1    26 seconds ago      Running             kindnet-cni               0                   581e97efa4744       kindnet-47kwl                               kube-system
	613ef92d1443f       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                      29 seconds ago      Running             kube-proxy                0                   601031b43f586       kube-proxy-cd5v6                            kube-system
	d9d979fcfcfc5       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                      48 seconds ago      Running             kube-apiserver            0                   a1e48f9143662       kube-apiserver-no-preload-020313            kube-system
	fb09c7ed84ca5       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                      48 seconds ago      Running             etcd                      0                   e4969fb2f6fb7       etcd-no-preload-020313                      kube-system
	646b0598f51d7       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                      48 seconds ago      Running             kube-controller-manager   0                   e0a7714f3f668       kube-controller-manager-no-preload-020313   kube-system
	8b8e8dce006ea       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                      48 seconds ago      Running             kube-scheduler            0                   badc8ed91b8e1       kube-scheduler-no-preload-020313            kube-system
	
	
	==> coredns [869d1e5acf9e9669e28f4680ae0ce8cc257836ed01b3c538ea7932a6f3d2bdd5] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:34498 - 16286 "HINFO IN 5653613499745886060.4584041220863089300. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.027058334s
	
	
	==> describe nodes <==
	Name:               no-preload-020313
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=no-preload-020313
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=67931d2cc0a2e29153be17ee4f2d502b8a45c9cb
	                    minikube.k8s.io/name=no-preload-020313
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_09T20_17_03_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 09 Oct 2025 20:16:59 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-020313
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 09 Oct 2025 20:17:33 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 09 Oct 2025 20:17:33 +0000   Thu, 09 Oct 2025 20:16:51 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 09 Oct 2025 20:17:33 +0000   Thu, 09 Oct 2025 20:16:51 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 09 Oct 2025 20:17:33 +0000   Thu, 09 Oct 2025 20:16:51 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 09 Oct 2025 20:17:33 +0000   Thu, 09 Oct 2025 20:17:23 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    no-preload-020313
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022292Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022292Ki
	  pods:               110
	System Info:
	  Machine ID:                 a7bb15f82a404ee69d00adc84d5c3c13
	  System UUID:                a3d84e5d-68ba-4d89-bdca-3ce490a9cb49
	  Boot ID:                    7eb9c7d9-8be8-415a-b196-052b632584a4
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         12s
	  kube-system                 coredns-66bc5c9577-h7jz6                     100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     30s
	  kube-system                 etcd-no-preload-020313                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         36s
	  kube-system                 kindnet-47kwl                                100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      30s
	  kube-system                 kube-apiserver-no-preload-020313             250m (12%)    0 (0%)      0 (0%)           0 (0%)         36s
	  kube-system                 kube-controller-manager-no-preload-020313    200m (10%)    0 (0%)      0 (0%)           0 (0%)         36s
	  kube-system                 kube-proxy-cd5v6                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         30s
	  kube-system                 kube-scheduler-no-preload-020313             100m (5%)     0 (0%)      0 (0%)           0 (0%)         36s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         29s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 28s                kube-proxy       
	  Normal   Starting                 49s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 49s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  49s (x8 over 49s)  kubelet          Node no-preload-020313 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    49s (x8 over 49s)  kubelet          Node no-preload-020313 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     49s (x8 over 49s)  kubelet          Node no-preload-020313 status is now: NodeHasSufficientPID
	  Normal   Starting                 36s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 36s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  36s                kubelet          Node no-preload-020313 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    36s                kubelet          Node no-preload-020313 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     36s                kubelet          Node no-preload-020313 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           31s                node-controller  Node no-preload-020313 event: Registered Node no-preload-020313 in Controller
	  Normal   NodeReady                15s                kubelet          Node no-preload-020313 status is now: NodeReady
	
	
	==> dmesg <==
	[Oct 9 19:45] overlayfs: idmapped layers are currently not supported
	[ +36.012100] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:47] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:48] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:49] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:50] overlayfs: idmapped layers are currently not supported
	[ +27.967875] overlayfs: idmapped layers are currently not supported
	[  +2.167003] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:51] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:52] overlayfs: idmapped layers are currently not supported
	[ +41.056229] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:53] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:54] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:55] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:57] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:59] overlayfs: idmapped layers are currently not supported
	[ +30.257956] overlayfs: idmapped layers are currently not supported
	[Oct 9 20:02] overlayfs: idmapped layers are currently not supported
	[Oct 9 20:04] overlayfs: idmapped layers are currently not supported
	[Oct 9 20:06] overlayfs: idmapped layers are currently not supported
	[Oct 9 20:12] overlayfs: idmapped layers are currently not supported
	[Oct 9 20:14] overlayfs: idmapped layers are currently not supported
	[Oct 9 20:15] overlayfs: idmapped layers are currently not supported
	[Oct 9 20:16] overlayfs: idmapped layers are currently not supported
	[ +23.810739] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [fb09c7ed84ca5d9d39f9d7299558547432b841f797f0077b1d977117bdc25bc8] <==
	{"level":"warn","ts":"2025-10-09T20:16:56.377679Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53656","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T20:16:56.403190Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53676","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T20:16:56.458443Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53688","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T20:16:56.495588Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53714","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T20:16:56.527541Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53720","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T20:16:56.558096Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53726","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T20:16:56.600261Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53740","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T20:16:56.641520Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53754","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T20:16:56.692648Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53760","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T20:16:56.761271Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53780","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T20:16:56.800496Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53806","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T20:16:56.924934Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53830","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T20:16:56.943044Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53856","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T20:16:57.009698Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53872","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T20:16:57.093082Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53878","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T20:16:57.146603Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53906","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T20:16:57.194653Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53924","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T20:16:57.298478Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53936","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T20:16:57.361305Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53950","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T20:16:57.419362Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53972","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T20:16:57.484455Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53980","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T20:16:57.600997Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54004","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T20:16:57.630646Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54014","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T20:16:57.664145Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54026","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T20:16:57.878429Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54036","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 20:17:38 up  2:59,  0 user,  load average: 2.69, 1.70, 1.61
	Linux no-preload-020313 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [ebc8c423df0f078307567e89c5f39702875de0f34f1ff36a84550abcc4e5e708] <==
	I1009 20:17:12.604387       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1009 20:17:12.604714       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1009 20:17:12.604883       1 main.go:148] setting mtu 1500 for CNI 
	I1009 20:17:12.604926       1 main.go:178] kindnetd IP family: "ipv4"
	I1009 20:17:12.604964       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-09T20:17:12Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1009 20:17:12.811236       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1009 20:17:12.811351       1 controller.go:381] "Waiting for informer caches to sync"
	I1009 20:17:12.811385       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1009 20:17:12.811863       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1009 20:17:13.103304       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1009 20:17:13.103352       1 metrics.go:72] Registering metrics
	I1009 20:17:13.103433       1 controller.go:711] "Syncing nftables rules"
	I1009 20:17:22.819059       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1009 20:17:22.819115       1 main.go:301] handling current node
	I1009 20:17:32.812759       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1009 20:17:32.812803       1 main.go:301] handling current node
	
	
	==> kube-apiserver [d9d979fcfcfc50b939277a7588f1a3ac29f58b0747fba3f478647a298272731e] <==
	I1009 20:16:59.470844       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1009 20:16:59.501604       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1009 20:16:59.506206       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1009 20:16:59.506313       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1009 20:16:59.574054       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1009 20:16:59.574826       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1009 20:16:59.591032       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1009 20:16:59.592291       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1009 20:17:00.134019       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1009 20:17:00.226247       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1009 20:17:00.226457       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1009 20:17:01.431293       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1009 20:17:01.500700       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1009 20:17:01.676729       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1009 20:17:01.693896       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.85.2]
	I1009 20:17:01.695596       1 controller.go:667] quota admission added evaluator for: endpoints
	I1009 20:17:01.702924       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1009 20:17:02.320307       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1009 20:17:02.503132       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1009 20:17:02.525930       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1009 20:17:02.539897       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1009 20:17:08.258314       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	I1009 20:17:08.566780       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1009 20:17:08.575627       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1009 20:17:08.613951       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	
	
	==> kube-controller-manager [646b0598f51d7a4696f8cf763c418262b74ec84a7135cd6bc481204fadfd178c] <==
	I1009 20:17:07.478058       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1009 20:17:07.490755       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1009 20:17:07.493450       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1009 20:17:07.501236       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1009 20:17:07.501378       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1009 20:17:07.502042       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1009 20:17:07.502054       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1009 20:17:07.501841       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1009 20:17:07.501859       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1009 20:17:07.501868       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1009 20:17:07.504279       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1009 20:17:07.504614       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1009 20:17:07.504734       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1009 20:17:07.504559       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1009 20:17:07.509481       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1009 20:17:07.515091       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1009 20:17:07.515547       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1009 20:17:07.516805       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1009 20:17:07.517323       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1009 20:17:07.517444       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="no-preload-020313"
	I1009 20:17:07.517511       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1009 20:17:07.524705       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1009 20:17:07.534180       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1009 20:17:07.537680       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1009 20:17:27.524217       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [613ef92d1443f7124a9b650e10ae340edcf051642144dacbf7de247ce7fc105c] <==
	I1009 20:17:09.512831       1 server_linux.go:53] "Using iptables proxy"
	I1009 20:17:09.688791       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1009 20:17:09.788996       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1009 20:17:09.789141       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1009 20:17:09.789231       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1009 20:17:09.912463       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1009 20:17:09.912575       1 server_linux.go:132] "Using iptables Proxier"
	I1009 20:17:09.918774       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1009 20:17:09.919136       1 server.go:527] "Version info" version="v1.34.1"
	I1009 20:17:09.919156       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1009 20:17:09.921935       1 config.go:200] "Starting service config controller"
	I1009 20:17:09.921954       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1009 20:17:09.921971       1 config.go:106] "Starting endpoint slice config controller"
	I1009 20:17:09.921976       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1009 20:17:09.921989       1 config.go:403] "Starting serviceCIDR config controller"
	I1009 20:17:09.921993       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1009 20:17:09.932371       1 config.go:309] "Starting node config controller"
	I1009 20:17:09.934307       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1009 20:17:09.934337       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1009 20:17:10.022863       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1009 20:17:10.022942       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1009 20:17:10.022962       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [8b8e8dce006eae1ba04a647c6058afe9d05c76616bd5be83e40abf42f2fc54c6] <==
	E1009 20:16:59.645871       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1009 20:16:59.645922       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1009 20:16:59.645966       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1009 20:16:59.646007       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1009 20:16:59.646046       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1009 20:16:59.646165       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1009 20:16:59.649475       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1009 20:17:00.541721       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1009 20:17:00.582903       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1009 20:17:00.596145       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E1009 20:17:00.641599       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1009 20:17:00.657968       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1009 20:17:00.696928       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1009 20:17:00.698223       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1009 20:17:00.800920       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1009 20:17:00.835047       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1009 20:17:00.842414       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1009 20:17:00.954150       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1009 20:17:01.007988       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1009 20:17:01.010461       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1009 20:17:01.026817       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1009 20:17:01.053679       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1009 20:17:01.071157       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1009 20:17:01.091953       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	I1009 20:17:03.590493       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 09 20:17:08 no-preload-020313 kubelet[1995]: I1009 20:17:08.407235    1995 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ws4dj\" (UniqueName: \"kubernetes.io/projected/7843ebcc-c450-40f9-b0dd-6cb09dd70a81-kube-api-access-ws4dj\") pod \"kube-proxy-cd5v6\" (UID: \"7843ebcc-c450-40f9-b0dd-6cb09dd70a81\") " pod="kube-system/kube-proxy-cd5v6"
	Oct 09 20:17:08 no-preload-020313 kubelet[1995]: I1009 20:17:08.407326    1995 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/7843ebcc-c450-40f9-b0dd-6cb09dd70a81-kube-proxy\") pod \"kube-proxy-cd5v6\" (UID: \"7843ebcc-c450-40f9-b0dd-6cb09dd70a81\") " pod="kube-system/kube-proxy-cd5v6"
	Oct 09 20:17:08 no-preload-020313 kubelet[1995]: I1009 20:17:08.408052    1995 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7843ebcc-c450-40f9-b0dd-6cb09dd70a81-xtables-lock\") pod \"kube-proxy-cd5v6\" (UID: \"7843ebcc-c450-40f9-b0dd-6cb09dd70a81\") " pod="kube-system/kube-proxy-cd5v6"
	Oct 09 20:17:08 no-preload-020313 kubelet[1995]: I1009 20:17:08.409071    1995 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/60a32ed3-a01b-47ee-9128-d0763b3502ee-cni-cfg\") pod \"kindnet-47kwl\" (UID: \"60a32ed3-a01b-47ee-9128-d0763b3502ee\") " pod="kube-system/kindnet-47kwl"
	Oct 09 20:17:08 no-preload-020313 kubelet[1995]: I1009 20:17:08.409369    1995 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/60a32ed3-a01b-47ee-9128-d0763b3502ee-xtables-lock\") pod \"kindnet-47kwl\" (UID: \"60a32ed3-a01b-47ee-9128-d0763b3502ee\") " pod="kube-system/kindnet-47kwl"
	Oct 09 20:17:08 no-preload-020313 kubelet[1995]: I1009 20:17:08.409505    1995 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/60a32ed3-a01b-47ee-9128-d0763b3502ee-lib-modules\") pod \"kindnet-47kwl\" (UID: \"60a32ed3-a01b-47ee-9128-d0763b3502ee\") " pod="kube-system/kindnet-47kwl"
	Oct 09 20:17:08 no-preload-020313 kubelet[1995]: I1009 20:17:08.409613    1995 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v2nw9\" (UniqueName: \"kubernetes.io/projected/60a32ed3-a01b-47ee-9128-d0763b3502ee-kube-api-access-v2nw9\") pod \"kindnet-47kwl\" (UID: \"60a32ed3-a01b-47ee-9128-d0763b3502ee\") " pod="kube-system/kindnet-47kwl"
	Oct 09 20:17:08 no-preload-020313 kubelet[1995]: E1009 20:17:08.614313    1995 projected.go:291] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
	Oct 09 20:17:08 no-preload-020313 kubelet[1995]: E1009 20:17:08.614349    1995 projected.go:196] Error preparing data for projected volume kube-api-access-v2nw9 for pod kube-system/kindnet-47kwl: configmap "kube-root-ca.crt" not found
	Oct 09 20:17:08 no-preload-020313 kubelet[1995]: E1009 20:17:08.614436    1995 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/60a32ed3-a01b-47ee-9128-d0763b3502ee-kube-api-access-v2nw9 podName:60a32ed3-a01b-47ee-9128-d0763b3502ee nodeName:}" failed. No retries permitted until 2025-10-09 20:17:09.11440738 +0000 UTC m=+6.820983316 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-v2nw9" (UniqueName: "kubernetes.io/projected/60a32ed3-a01b-47ee-9128-d0763b3502ee-kube-api-access-v2nw9") pod "kindnet-47kwl" (UID: "60a32ed3-a01b-47ee-9128-d0763b3502ee") : configmap "kube-root-ca.crt" not found
	Oct 09 20:17:08 no-preload-020313 kubelet[1995]: E1009 20:17:08.614893    1995 projected.go:291] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
	Oct 09 20:17:08 no-preload-020313 kubelet[1995]: E1009 20:17:08.614912    1995 projected.go:196] Error preparing data for projected volume kube-api-access-ws4dj for pod kube-system/kube-proxy-cd5v6: configmap "kube-root-ca.crt" not found
	Oct 09 20:17:08 no-preload-020313 kubelet[1995]: E1009 20:17:08.614970    1995 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/7843ebcc-c450-40f9-b0dd-6cb09dd70a81-kube-api-access-ws4dj podName:7843ebcc-c450-40f9-b0dd-6cb09dd70a81 nodeName:}" failed. No retries permitted until 2025-10-09 20:17:09.11495572 +0000 UTC m=+6.821531656 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-ws4dj" (UniqueName: "kubernetes.io/projected/7843ebcc-c450-40f9-b0dd-6cb09dd70a81-kube-api-access-ws4dj") pod "kube-proxy-cd5v6" (UID: "7843ebcc-c450-40f9-b0dd-6cb09dd70a81") : configmap "kube-root-ca.crt" not found
	Oct 09 20:17:09 no-preload-020313 kubelet[1995]: I1009 20:17:09.118702    1995 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	Oct 09 20:17:10 no-preload-020313 kubelet[1995]: I1009 20:17:10.683533    1995 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-cd5v6" podStartSLOduration=2.683513233 podStartE2EDuration="2.683513233s" podCreationTimestamp="2025-10-09 20:17:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-09 20:17:09.698156943 +0000 UTC m=+7.404732920" watchObservedRunningTime="2025-10-09 20:17:10.683513233 +0000 UTC m=+8.390089169"
	Oct 09 20:17:23 no-preload-020313 kubelet[1995]: I1009 20:17:23.146561    1995 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Oct 09 20:17:23 no-preload-020313 kubelet[1995]: I1009 20:17:23.195793    1995 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-47kwl" podStartSLOduration=12.13392345 podStartE2EDuration="15.195775245s" podCreationTimestamp="2025-10-09 20:17:08 +0000 UTC" firstStartedPulling="2025-10-09 20:17:09.323129908 +0000 UTC m=+7.029705844" lastFinishedPulling="2025-10-09 20:17:12.384981695 +0000 UTC m=+10.091557639" observedRunningTime="2025-10-09 20:17:12.681469417 +0000 UTC m=+10.388045353" watchObservedRunningTime="2025-10-09 20:17:23.195775245 +0000 UTC m=+20.902351189"
	Oct 09 20:17:23 no-preload-020313 kubelet[1995]: I1009 20:17:23.254616    1995 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/50ef033a-7db2-4326-a6d6-574c692f50ba-config-volume\") pod \"coredns-66bc5c9577-h7jz6\" (UID: \"50ef033a-7db2-4326-a6d6-574c692f50ba\") " pod="kube-system/coredns-66bc5c9577-h7jz6"
	Oct 09 20:17:23 no-preload-020313 kubelet[1995]: I1009 20:17:23.254814    1995 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-thq84\" (UniqueName: \"kubernetes.io/projected/50ef033a-7db2-4326-a6d6-574c692f50ba-kube-api-access-thq84\") pod \"coredns-66bc5c9577-h7jz6\" (UID: \"50ef033a-7db2-4326-a6d6-574c692f50ba\") " pod="kube-system/coredns-66bc5c9577-h7jz6"
	Oct 09 20:17:23 no-preload-020313 kubelet[1995]: I1009 20:17:23.254935    1995 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/03ca5595-692b-4e09-a599-439b385749c1-tmp\") pod \"storage-provisioner\" (UID: \"03ca5595-692b-4e09-a599-439b385749c1\") " pod="kube-system/storage-provisioner"
	Oct 09 20:17:23 no-preload-020313 kubelet[1995]: I1009 20:17:23.255036    1995 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fr7c7\" (UniqueName: \"kubernetes.io/projected/03ca5595-692b-4e09-a599-439b385749c1-kube-api-access-fr7c7\") pod \"storage-provisioner\" (UID: \"03ca5595-692b-4e09-a599-439b385749c1\") " pod="kube-system/storage-provisioner"
	Oct 09 20:17:23 no-preload-020313 kubelet[1995]: W1009 20:17:23.541471    1995 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/5f4dc51ee851ef6c368b3e8adfe4e5921c2b1bdc3199a9c54c6ccf58afab3861/crio-a67ecb0c4fd6d94e225abe48b46000c16dec488ccaa9ecee1d476d9c4eedcac8 WatchSource:0}: Error finding container a67ecb0c4fd6d94e225abe48b46000c16dec488ccaa9ecee1d476d9c4eedcac8: Status 404 returned error can't find the container with id a67ecb0c4fd6d94e225abe48b46000c16dec488ccaa9ecee1d476d9c4eedcac8
	Oct 09 20:17:23 no-preload-020313 kubelet[1995]: I1009 20:17:23.716719    1995 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=14.716699177 podStartE2EDuration="14.716699177s" podCreationTimestamp="2025-10-09 20:17:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-09 20:17:23.701064736 +0000 UTC m=+21.407640697" watchObservedRunningTime="2025-10-09 20:17:23.716699177 +0000 UTC m=+21.423275113"
	Oct 09 20:17:24 no-preload-020313 kubelet[1995]: I1009 20:17:24.705512    1995 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-h7jz6" podStartSLOduration=16.705492199 podStartE2EDuration="16.705492199s" podCreationTimestamp="2025-10-09 20:17:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-09 20:17:23.717658723 +0000 UTC m=+21.424234724" watchObservedRunningTime="2025-10-09 20:17:24.705492199 +0000 UTC m=+22.412068135"
	Oct 09 20:17:26 no-preload-020313 kubelet[1995]: I1009 20:17:26.674876    1995 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5gttt\" (UniqueName: \"kubernetes.io/projected/bee2655f-729a-4600-b1e4-939eef3e8e2b-kube-api-access-5gttt\") pod \"busybox\" (UID: \"bee2655f-729a-4600-b1e4-939eef3e8e2b\") " pod="default/busybox"
	
	
	==> storage-provisioner [3ce5637991450d73497958900c7aedf4deab7fd334a10419aaad966cd01a4799] <==
	I1009 20:17:23.612596       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1009 20:17:23.629465       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1009 20:17:23.629644       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1009 20:17:23.633168       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1009 20:17:23.641727       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1009 20:17:23.642270       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1009 20:17:23.644841       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-020313_dd490bf3-18f5-4680-8010-ba98846db129!
	W1009 20:17:23.645325       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1009 20:17:23.645861       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"2df7659d-a29d-4122-8f28-18add9557e18", APIVersion:"v1", ResourceVersion:"459", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-020313_dd490bf3-18f5-4680-8010-ba98846db129 became leader
	W1009 20:17:23.657187       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1009 20:17:23.745911       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-020313_dd490bf3-18f5-4680-8010-ba98846db129!
	W1009 20:17:25.660954       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1009 20:17:25.668440       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1009 20:17:27.672085       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1009 20:17:27.679664       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1009 20:17:29.683057       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1009 20:17:29.690338       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1009 20:17:31.694106       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1009 20:17:31.698619       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1009 20:17:33.702391       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1009 20:17:33.707230       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1009 20:17:35.710617       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1009 20:17:35.716351       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1009 20:17:37.721005       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1009 20:17:37.733071       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-020313 -n no-preload-020313
helpers_test.go:269: (dbg) Run:  kubectl --context no-preload-020313 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/no-preload/serial/EnableAddonWhileActive FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (2.77s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/Pause (7.66s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p old-k8s-version-670649 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 pause -p old-k8s-version-670649 --alsologtostderr -v=1: exit status 80 (2.654379802s)

                                                
                                                
-- stdout --
	* Pausing node old-k8s-version-670649 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1009 20:17:49.367475  485211 out.go:360] Setting OutFile to fd 1 ...
	I1009 20:17:49.367820  485211 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1009 20:17:49.367884  485211 out.go:374] Setting ErrFile to fd 2...
	I1009 20:17:49.367907  485211 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1009 20:17:49.368217  485211 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21683-294150/.minikube/bin
	I1009 20:17:49.368542  485211 out.go:368] Setting JSON to false
	I1009 20:17:49.368614  485211 mustload.go:65] Loading cluster: old-k8s-version-670649
	I1009 20:17:49.369215  485211 config.go:182] Loaded profile config "old-k8s-version-670649": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1009 20:17:49.369775  485211 cli_runner.go:164] Run: docker container inspect old-k8s-version-670649 --format={{.State.Status}}
	I1009 20:17:49.389155  485211 host.go:66] Checking if "old-k8s-version-670649" exists ...
	I1009 20:17:49.389548  485211 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1009 20:17:49.452824  485211 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:55 OomKillDisable:true NGoroutines:65 SystemTime:2025-10-09 20:17:49.442100822 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214827008 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1009 20:17:49.453627  485211 pause.go:58] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-arm64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1758198818-20370/minikube-v1.37.0-1758198818-20370-arm64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1758198818-20370-arm64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:old-k8s-version-670649 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1009 20:17:49.459057  485211 out.go:179] * Pausing node old-k8s-version-670649 ... 
	I1009 20:17:49.462170  485211 host.go:66] Checking if "old-k8s-version-670649" exists ...
	I1009 20:17:49.462541  485211 ssh_runner.go:195] Run: systemctl --version
	I1009 20:17:49.462596  485211 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-670649
	I1009 20:17:49.480688  485211 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33426 SSHKeyPath:/home/jenkins/minikube-integration/21683-294150/.minikube/machines/old-k8s-version-670649/id_rsa Username:docker}
	I1009 20:17:49.584191  485211 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1009 20:17:49.605239  485211 pause.go:52] kubelet running: true
	I1009 20:17:49.605331  485211 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1009 20:17:49.864671  485211 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1009 20:17:49.864772  485211 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1009 20:17:49.937913  485211 cri.go:89] found id: "cb650f367de61c5e442498f548a4c14a5e49c4b9f22fdd5267e068a8e42bee89"
	I1009 20:17:49.937941  485211 cri.go:89] found id: "d5ab970b7a5ec7cb60f4d5d6366e178aef08471353149dd93fed3173338875b5"
	I1009 20:17:49.937947  485211 cri.go:89] found id: "c00ae0a7c53b38c6eac3d76a7f59448c9a2d7b83553cd419b403f69d70cbc2fd"
	I1009 20:17:49.937951  485211 cri.go:89] found id: "f937ff34590cb60d397ac7c36418ba0efc5150992a944bf4f950e6e18660bffa"
	I1009 20:17:49.937954  485211 cri.go:89] found id: "5311b03994768a53d0ae1759640177709434e395aecd0e575ce30445dd93333a"
	I1009 20:17:49.937958  485211 cri.go:89] found id: "35d9539ca66deeeb7d54f441e4e9faa5be578cf1ccc3e88b622d80472a21a3aa"
	I1009 20:17:49.937961  485211 cri.go:89] found id: "2dbcc3dbc3674682da2fd59a5223bfdbb8dff89e8f24cc4606b92f04b8486139"
	I1009 20:17:49.937965  485211 cri.go:89] found id: "dbb7cba7d5da37c54a17588da4d76f5d70497f3e32a6e495c433bd46fb90292a"
	I1009 20:17:49.937968  485211 cri.go:89] found id: "c615280026154e697494663ebf653ff25eb7cef14b02ea4bc2dce85a23e792fd"
	I1009 20:17:49.938001  485211 cri.go:89] found id: "a71c866e69c210bdbe0a0dd45682a1800e2e4a640ba2d08c75b017add09c2faf"
	I1009 20:17:49.938012  485211 cri.go:89] found id: "e8df8a1bb1ca7ae3c4afd2076b94df79858ba48bc7832eecd672642171f287c3"
	I1009 20:17:49.938016  485211 cri.go:89] found id: ""
	I1009 20:17:49.938078  485211 ssh_runner.go:195] Run: sudo runc list -f json
	I1009 20:17:49.949865  485211 retry.go:31] will retry after 297.579148ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-09T20:17:49Z" level=error msg="open /run/runc: no such file or directory"
	I1009 20:17:50.248420  485211 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1009 20:17:50.263052  485211 pause.go:52] kubelet running: false
	I1009 20:17:50.263122  485211 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1009 20:17:50.434775  485211 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1009 20:17:50.434887  485211 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1009 20:17:50.505500  485211 cri.go:89] found id: "cb650f367de61c5e442498f548a4c14a5e49c4b9f22fdd5267e068a8e42bee89"
	I1009 20:17:50.505530  485211 cri.go:89] found id: "d5ab970b7a5ec7cb60f4d5d6366e178aef08471353149dd93fed3173338875b5"
	I1009 20:17:50.505536  485211 cri.go:89] found id: "c00ae0a7c53b38c6eac3d76a7f59448c9a2d7b83553cd419b403f69d70cbc2fd"
	I1009 20:17:50.505540  485211 cri.go:89] found id: "f937ff34590cb60d397ac7c36418ba0efc5150992a944bf4f950e6e18660bffa"
	I1009 20:17:50.505544  485211 cri.go:89] found id: "5311b03994768a53d0ae1759640177709434e395aecd0e575ce30445dd93333a"
	I1009 20:17:50.505548  485211 cri.go:89] found id: "35d9539ca66deeeb7d54f441e4e9faa5be578cf1ccc3e88b622d80472a21a3aa"
	I1009 20:17:50.505569  485211 cri.go:89] found id: "2dbcc3dbc3674682da2fd59a5223bfdbb8dff89e8f24cc4606b92f04b8486139"
	I1009 20:17:50.505578  485211 cri.go:89] found id: "dbb7cba7d5da37c54a17588da4d76f5d70497f3e32a6e495c433bd46fb90292a"
	I1009 20:17:50.505582  485211 cri.go:89] found id: "c615280026154e697494663ebf653ff25eb7cef14b02ea4bc2dce85a23e792fd"
	I1009 20:17:50.505592  485211 cri.go:89] found id: "a71c866e69c210bdbe0a0dd45682a1800e2e4a640ba2d08c75b017add09c2faf"
	I1009 20:17:50.505599  485211 cri.go:89] found id: "e8df8a1bb1ca7ae3c4afd2076b94df79858ba48bc7832eecd672642171f287c3"
	I1009 20:17:50.505602  485211 cri.go:89] found id: ""
	I1009 20:17:50.505660  485211 ssh_runner.go:195] Run: sudo runc list -f json
	I1009 20:17:50.517778  485211 retry.go:31] will retry after 405.329185ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-09T20:17:50Z" level=error msg="open /run/runc: no such file or directory"
	I1009 20:17:50.923317  485211 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1009 20:17:50.937490  485211 pause.go:52] kubelet running: false
	I1009 20:17:50.937580  485211 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1009 20:17:51.136314  485211 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1009 20:17:51.136407  485211 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1009 20:17:51.244836  485211 cri.go:89] found id: "cb650f367de61c5e442498f548a4c14a5e49c4b9f22fdd5267e068a8e42bee89"
	I1009 20:17:51.244868  485211 cri.go:89] found id: "d5ab970b7a5ec7cb60f4d5d6366e178aef08471353149dd93fed3173338875b5"
	I1009 20:17:51.244873  485211 cri.go:89] found id: "c00ae0a7c53b38c6eac3d76a7f59448c9a2d7b83553cd419b403f69d70cbc2fd"
	I1009 20:17:51.244877  485211 cri.go:89] found id: "f937ff34590cb60d397ac7c36418ba0efc5150992a944bf4f950e6e18660bffa"
	I1009 20:17:51.244880  485211 cri.go:89] found id: "5311b03994768a53d0ae1759640177709434e395aecd0e575ce30445dd93333a"
	I1009 20:17:51.244884  485211 cri.go:89] found id: "35d9539ca66deeeb7d54f441e4e9faa5be578cf1ccc3e88b622d80472a21a3aa"
	I1009 20:17:51.244887  485211 cri.go:89] found id: "2dbcc3dbc3674682da2fd59a5223bfdbb8dff89e8f24cc4606b92f04b8486139"
	I1009 20:17:51.244891  485211 cri.go:89] found id: "dbb7cba7d5da37c54a17588da4d76f5d70497f3e32a6e495c433bd46fb90292a"
	I1009 20:17:51.244894  485211 cri.go:89] found id: "c615280026154e697494663ebf653ff25eb7cef14b02ea4bc2dce85a23e792fd"
	I1009 20:17:51.244901  485211 cri.go:89] found id: "a71c866e69c210bdbe0a0dd45682a1800e2e4a640ba2d08c75b017add09c2faf"
	I1009 20:17:51.244911  485211 cri.go:89] found id: "e8df8a1bb1ca7ae3c4afd2076b94df79858ba48bc7832eecd672642171f287c3"
	I1009 20:17:51.244915  485211 cri.go:89] found id: ""
	I1009 20:17:51.244984  485211 ssh_runner.go:195] Run: sudo runc list -f json
	I1009 20:17:51.259978  485211 retry.go:31] will retry after 310.828267ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-09T20:17:51Z" level=error msg="open /run/runc: no such file or directory"
	I1009 20:17:51.571514  485211 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1009 20:17:51.587344  485211 pause.go:52] kubelet running: false
	I1009 20:17:51.587411  485211 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1009 20:17:51.812751  485211 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1009 20:17:51.812842  485211 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1009 20:17:51.919538  485211 cri.go:89] found id: "cb650f367de61c5e442498f548a4c14a5e49c4b9f22fdd5267e068a8e42bee89"
	I1009 20:17:51.919562  485211 cri.go:89] found id: "d5ab970b7a5ec7cb60f4d5d6366e178aef08471353149dd93fed3173338875b5"
	I1009 20:17:51.919567  485211 cri.go:89] found id: "c00ae0a7c53b38c6eac3d76a7f59448c9a2d7b83553cd419b403f69d70cbc2fd"
	I1009 20:17:51.919570  485211 cri.go:89] found id: "f937ff34590cb60d397ac7c36418ba0efc5150992a944bf4f950e6e18660bffa"
	I1009 20:17:51.919573  485211 cri.go:89] found id: "5311b03994768a53d0ae1759640177709434e395aecd0e575ce30445dd93333a"
	I1009 20:17:51.919653  485211 cri.go:89] found id: "35d9539ca66deeeb7d54f441e4e9faa5be578cf1ccc3e88b622d80472a21a3aa"
	I1009 20:17:51.919662  485211 cri.go:89] found id: "2dbcc3dbc3674682da2fd59a5223bfdbb8dff89e8f24cc4606b92f04b8486139"
	I1009 20:17:51.919665  485211 cri.go:89] found id: "dbb7cba7d5da37c54a17588da4d76f5d70497f3e32a6e495c433bd46fb90292a"
	I1009 20:17:51.919669  485211 cri.go:89] found id: "c615280026154e697494663ebf653ff25eb7cef14b02ea4bc2dce85a23e792fd"
	I1009 20:17:51.919688  485211 cri.go:89] found id: "a71c866e69c210bdbe0a0dd45682a1800e2e4a640ba2d08c75b017add09c2faf"
	I1009 20:17:51.919692  485211 cri.go:89] found id: "e8df8a1bb1ca7ae3c4afd2076b94df79858ba48bc7832eecd672642171f287c3"
	I1009 20:17:51.919694  485211 cri.go:89] found id: ""
	I1009 20:17:51.919764  485211 ssh_runner.go:195] Run: sudo runc list -f json
	I1009 20:17:51.934986  485211 out.go:203] 
	W1009 20:17:51.937831  485211 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-09T20:17:51Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-09T20:17:51Z" level=error msg="open /run/runc: no such file or directory"
	
	W1009 20:17:51.937856  485211 out.go:285] * 
	* 
	W1009 20:17:51.945574  485211 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1009 20:17:51.948728  485211 out.go:203] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-arm64 pause -p old-k8s-version-670649 --alsologtostderr -v=1 failed: exit status 80
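Annotation: the GUEST_PAUSE error above is raised when `sudo runc list -f json` fails with `open /run/runc: no such file or directory`, i.e. the runc state directory is missing on the node even though CRI-O still reports running containers. A minimal manual check along the same lines (a sketch only; the container name is taken from this log, and /run/runc is assumed to be the runtime state root that the pause code queries) would be:

	# Run against the node container from the test host (name taken from the log above).
	docker exec old-k8s-version-670649 sudo ls -ld /run/runc       # expected to fail here: No such file or directory
	docker exec old-k8s-version-670649 sudo crictl ps -q           # CRI-O itself still lists running containers
	docker exec old-k8s-version-670649 sudo runc list -f json      # reproduces the exact error hit by "minikube pause"

Whether the directory is absent because CRI-O is configured with a different runtime root, or because /run (a tmpfs per the docker inspect output below) was cleared across the node restart, is not determined by this log.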
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect old-k8s-version-670649
helpers_test.go:243: (dbg) docker inspect old-k8s-version-670649:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "242f5a73bf3408c78204127e16255d5d302b161639419f815a7a343ee83b928d",
	        "Created": "2025-10-09T20:15:15.014520334Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 481515,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-09T20:16:38.13820366Z",
	            "FinishedAt": "2025-10-09T20:16:35.507324918Z"
	        },
	        "Image": "sha256:0e619a02aa1f399c748a05785ca381533d7a01df724b62f3a9ddd9e288db8f6f",
	        "ResolvConfPath": "/var/lib/docker/containers/242f5a73bf3408c78204127e16255d5d302b161639419f815a7a343ee83b928d/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/242f5a73bf3408c78204127e16255d5d302b161639419f815a7a343ee83b928d/hostname",
	        "HostsPath": "/var/lib/docker/containers/242f5a73bf3408c78204127e16255d5d302b161639419f815a7a343ee83b928d/hosts",
	        "LogPath": "/var/lib/docker/containers/242f5a73bf3408c78204127e16255d5d302b161639419f815a7a343ee83b928d/242f5a73bf3408c78204127e16255d5d302b161639419f815a7a343ee83b928d-json.log",
	        "Name": "/old-k8s-version-670649",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-670649:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-670649",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "242f5a73bf3408c78204127e16255d5d302b161639419f815a7a343ee83b928d",
	                "LowerDir": "/var/lib/docker/overlay2/27381f7e732d8c7d661645d8c8bce4a7b4487d7ccc8446c8ec75884f80dfc2aa-init/diff:/var/lib/docker/overlay2/810a91395ed9b7ed2c0bbbdee8600efcf64f88722cbabc47d471235a9f901ed9/diff",
	                "MergedDir": "/var/lib/docker/overlay2/27381f7e732d8c7d661645d8c8bce4a7b4487d7ccc8446c8ec75884f80dfc2aa/merged",
	                "UpperDir": "/var/lib/docker/overlay2/27381f7e732d8c7d661645d8c8bce4a7b4487d7ccc8446c8ec75884f80dfc2aa/diff",
	                "WorkDir": "/var/lib/docker/overlay2/27381f7e732d8c7d661645d8c8bce4a7b4487d7ccc8446c8ec75884f80dfc2aa/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-670649",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-670649/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-670649",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-670649",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-670649",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "b9d636cca443fc40e683831175ed0fd35e707a8bee5a5ea62739b2547fd638cb",
	            "SandboxKey": "/var/run/docker/netns/b9d636cca443",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33426"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33427"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33430"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33428"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33429"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-670649": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "2a:3a:6e:86:87:c4",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "9f71ce8c90e918d3740f414c21f48298da6003535f949f572c810d48866acbdf",
	                    "EndpointID": "6c4b1f01efcfbc59c1dd4dd21971719b3bdf1fa4db2bee28353047e490d333cf",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-670649",
	                        "242f5a73bf34"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
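For reference, the host port bindings captured in the inspect output above (e.g. the node's SSH port 22/tcp published at 127.0.0.1:33426) can be read back directly with the standard docker CLI; a small sketch using the same container name:

	# Host side of the node's SSH port, per NetworkSettings.Ports above.
	docker port old-k8s-version-670649 22/tcp
	# Same value via an inspect format query.
	docker inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' old-k8s-version-670649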
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-670649 -n old-k8s-version-670649
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-670649 -n old-k8s-version-670649: exit status 2 (525.069992ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-670649 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p old-k8s-version-670649 logs -n 25: (1.466852054s)
helpers_test.go:260: TestStartStop/group/old-k8s-version/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────
────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────
────┤
	│ ssh     │ -p cilium-535911 sudo crio config                                                                                                                                                                                                             │ cilium-535911             │ jenkins │ v1.37.0 │ 09 Oct 25 20:05 UTC │                     │
	│ delete  │ -p cilium-535911                                                                                                                                                                                                                              │ cilium-535911             │ jenkins │ v1.37.0 │ 09 Oct 25 20:05 UTC │ 09 Oct 25 20:05 UTC │
	│ start   │ -p force-systemd-env-242564 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                                                                                                                                    │ force-systemd-env-242564  │ jenkins │ v1.37.0 │ 09 Oct 25 20:05 UTC │                     │
	│ ssh     │ force-systemd-flag-736218 ssh cat /etc/crio/crio.conf.d/02-crio.conf                                                                                                                                                                          │ force-systemd-flag-736218 │ jenkins │ v1.37.0 │ 09 Oct 25 20:12 UTC │ 09 Oct 25 20:12 UTC │
	│ delete  │ -p force-systemd-flag-736218                                                                                                                                                                                                                  │ force-systemd-flag-736218 │ jenkins │ v1.37.0 │ 09 Oct 25 20:12 UTC │ 09 Oct 25 20:12 UTC │
	│ start   │ -p cert-expiration-282540 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio                                                                                                                                        │ cert-expiration-282540    │ jenkins │ v1.37.0 │ 09 Oct 25 20:12 UTC │ 09 Oct 25 20:12 UTC │
	│ delete  │ -p force-systemd-env-242564                                                                                                                                                                                                                   │ force-systemd-env-242564  │ jenkins │ v1.37.0 │ 09 Oct 25 20:14 UTC │ 09 Oct 25 20:14 UTC │
	│ start   │ -p cert-options-038875 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio                     │ cert-options-038875       │ jenkins │ v1.37.0 │ 09 Oct 25 20:14 UTC │ 09 Oct 25 20:15 UTC │
	│ ssh     │ cert-options-038875 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                                                   │ cert-options-038875       │ jenkins │ v1.37.0 │ 09 Oct 25 20:15 UTC │ 09 Oct 25 20:15 UTC │
	│ ssh     │ -p cert-options-038875 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                 │ cert-options-038875       │ jenkins │ v1.37.0 │ 09 Oct 25 20:15 UTC │ 09 Oct 25 20:15 UTC │
	│ delete  │ -p cert-options-038875                                                                                                                                                                                                                        │ cert-options-038875       │ jenkins │ v1.37.0 │ 09 Oct 25 20:15 UTC │ 09 Oct 25 20:15 UTC │
	│ start   │ -p old-k8s-version-670649 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-670649    │ jenkins │ v1.37.0 │ 09 Oct 25 20:15 UTC │ 09 Oct 25 20:16 UTC │
	│ start   │ -p cert-expiration-282540 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-282540    │ jenkins │ v1.37.0 │ 09 Oct 25 20:15 UTC │ 09 Oct 25 20:16 UTC │
	│ delete  │ -p cert-expiration-282540                                                                                                                                                                                                                     │ cert-expiration-282540    │ jenkins │ v1.37.0 │ 09 Oct 25 20:16 UTC │ 09 Oct 25 20:16 UTC │
	│ start   │ -p no-preload-020313 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-020313         │ jenkins │ v1.37.0 │ 09 Oct 25 20:16 UTC │ 09 Oct 25 20:17 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-670649 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-670649    │ jenkins │ v1.37.0 │ 09 Oct 25 20:16 UTC │                     │
	│ stop    │ -p old-k8s-version-670649 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-670649    │ jenkins │ v1.37.0 │ 09 Oct 25 20:16 UTC │ 09 Oct 25 20:16 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-670649 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-670649    │ jenkins │ v1.37.0 │ 09 Oct 25 20:16 UTC │ 09 Oct 25 20:16 UTC │
	│ start   │ -p old-k8s-version-670649 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-670649    │ jenkins │ v1.37.0 │ 09 Oct 25 20:16 UTC │ 09 Oct 25 20:17 UTC │
	│ addons  │ enable metrics-server -p no-preload-020313 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-020313         │ jenkins │ v1.37.0 │ 09 Oct 25 20:17 UTC │                     │
	│ stop    │ -p no-preload-020313 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-020313         │ jenkins │ v1.37.0 │ 09 Oct 25 20:17 UTC │ 09 Oct 25 20:17 UTC │
	│ image   │ old-k8s-version-670649 image list --format=json                                                                                                                                                                                               │ old-k8s-version-670649    │ jenkins │ v1.37.0 │ 09 Oct 25 20:17 UTC │ 09 Oct 25 20:17 UTC │
	│ pause   │ -p old-k8s-version-670649 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-670649    │ jenkins │ v1.37.0 │ 09 Oct 25 20:17 UTC │                     │
	│ addons  │ enable dashboard -p no-preload-020313 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-020313         │ jenkins │ v1.37.0 │ 09 Oct 25 20:17 UTC │ 09 Oct 25 20:17 UTC │
	│ start   │ -p no-preload-020313 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-020313         │ jenkins │ v1.37.0 │ 09 Oct 25 20:17 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────
────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/09 20:17:51
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1009 20:17:51.705859  485563 out.go:360] Setting OutFile to fd 1 ...
	I1009 20:17:51.706105  485563 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1009 20:17:51.706132  485563 out.go:374] Setting ErrFile to fd 2...
	I1009 20:17:51.706150  485563 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1009 20:17:51.706475  485563 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21683-294150/.minikube/bin
	I1009 20:17:51.706929  485563 out.go:368] Setting JSON to false
	I1009 20:17:51.714177  485563 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":10811,"bootTime":1760030261,"procs":188,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1009 20:17:51.714312  485563 start.go:143] virtualization:  
	I1009 20:17:51.718069  485563 out.go:179] * [no-preload-020313] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1009 20:17:51.721947  485563 out.go:179]   - MINIKUBE_LOCATION=21683
	I1009 20:17:51.722177  485563 notify.go:221] Checking for updates...
	I1009 20:17:51.728340  485563 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1009 20:17:51.731309  485563 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21683-294150/kubeconfig
	I1009 20:17:51.734363  485563 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21683-294150/.minikube
	I1009 20:17:51.737344  485563 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1009 20:17:51.740394  485563 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1009 20:17:51.743903  485563 config.go:182] Loaded profile config "no-preload-020313": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1009 20:17:51.744541  485563 driver.go:422] Setting default libvirt URI to qemu:///system
	I1009 20:17:51.766637  485563 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1009 20:17:51.766844  485563 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1009 20:17:51.873522  485563 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-09 20:17:51.861750759 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214827008 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1009 20:17:51.873639  485563 docker.go:319] overlay module found
	I1009 20:17:51.876851  485563 out.go:179] * Using the docker driver based on existing profile
	I1009 20:17:51.879721  485563 start.go:309] selected driver: docker
	I1009 20:17:51.879743  485563 start.go:930] validating driver "docker" against &{Name:no-preload-020313 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-020313 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9
p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1009 20:17:51.879843  485563 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1009 20:17:51.880590  485563 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1009 20:17:51.985418  485563 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-09 20:17:51.973012117 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214827008 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1009 20:17:51.985769  485563 start_flags.go:993] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1009 20:17:51.985795  485563 cni.go:84] Creating CNI manager for ""
	I1009 20:17:51.985859  485563 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1009 20:17:51.985894  485563 start.go:353] cluster config:
	{Name:no-preload-020313 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-020313 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Containe
rRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false
DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1009 20:17:51.989277  485563 out.go:179] * Starting "no-preload-020313" primary control-plane node in "no-preload-020313" cluster
	I1009 20:17:51.992697  485563 cache.go:123] Beginning downloading kic base image for docker with crio
	I1009 20:17:51.996456  485563 out.go:179] * Pulling base image v0.0.48-1759745255-21703 ...
	I1009 20:17:51.999912  485563 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1009 20:17:52.000090  485563 profile.go:143] Saving config to /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/no-preload-020313/config.json ...
	I1009 20:17:52.000471  485563 cache.go:107] acquiring lock: {Name:mk067853efdb9d5dfe210e9bdb60a1140d344bf6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1009 20:17:52.000573  485563 cache.go:115] /home/jenkins/minikube-integration/21683-294150/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1009 20:17:52.000588  485563 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/21683-294150/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 136.543µs
	I1009 20:17:52.000610  485563 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/21683-294150/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1009 20:17:52.000626  485563 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon
	I1009 20:17:52.000869  485563 cache.go:107] acquiring lock: {Name:mk549023c9da29243b6f2f23c58ca3df426147a2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1009 20:17:52.000951  485563 cache.go:115] /home/jenkins/minikube-integration/21683-294150/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1 exists
	I1009 20:17:52.000961  485563 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.34.1" -> "/home/jenkins/minikube-integration/21683-294150/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1" took 98.594µs
	I1009 20:17:52.000968  485563 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.34.1 -> /home/jenkins/minikube-integration/21683-294150/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1 succeeded
	I1009 20:17:52.000981  485563 cache.go:107] acquiring lock: {Name:mk9525a25fb678d6580f1eb602de12141a8b59a8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1009 20:17:52.001012  485563 cache.go:115] /home/jenkins/minikube-integration/21683-294150/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1 exists
	I1009 20:17:52.001028  485563 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.34.1" -> "/home/jenkins/minikube-integration/21683-294150/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1" took 38.335µs
	I1009 20:17:52.001102  485563 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.34.1 -> /home/jenkins/minikube-integration/21683-294150/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1 succeeded
	I1009 20:17:52.001153  485563 cache.go:107] acquiring lock: {Name:mk65f6488cbc08e9947528f7f60d66925e264a10 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1009 20:17:52.001197  485563 cache.go:115] /home/jenkins/minikube-integration/21683-294150/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1 exists
	I1009 20:17:52.001202  485563 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.34.1" -> "/home/jenkins/minikube-integration/21683-294150/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1" took 52.169µs
	I1009 20:17:52.001208  485563 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.34.1 -> /home/jenkins/minikube-integration/21683-294150/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1 succeeded
	I1009 20:17:52.001218  485563 cache.go:107] acquiring lock: {Name:mkef8cd450b6ec8be1600cd17c6da55958b25391 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1009 20:17:52.001246  485563 cache.go:115] /home/jenkins/minikube-integration/21683-294150/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1 exists
	I1009 20:17:52.001337  485563 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.34.1" -> "/home/jenkins/minikube-integration/21683-294150/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1" took 119.501µs
	I1009 20:17:52.001346  485563 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.34.1 -> /home/jenkins/minikube-integration/21683-294150/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1 succeeded
	I1009 20:17:52.001362  485563 cache.go:107] acquiring lock: {Name:mkd217de9f557eca101e9a8593531ca54ad0485b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1009 20:17:52.001413  485563 cache.go:115] /home/jenkins/minikube-integration/21683-294150/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0 exists
	I1009 20:17:52.001419  485563 cache.go:96] cache image "registry.k8s.io/etcd:3.6.4-0" -> "/home/jenkins/minikube-integration/21683-294150/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0" took 59.513µs
	I1009 20:17:52.001425  485563 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.4-0 -> /home/jenkins/minikube-integration/21683-294150/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0 succeeded
	I1009 20:17:52.001435  485563 cache.go:107] acquiring lock: {Name:mkac1bf7d8d221e16de37f34c6c9a23b671148bc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1009 20:17:52.001463  485563 cache.go:115] /home/jenkins/minikube-integration/21683-294150/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 exists
	I1009 20:17:52.001468  485563 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/21683-294150/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1" took 35.586µs
	I1009 20:17:52.001474  485563 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/21683-294150/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 succeeded
	I1009 20:17:52.001484  485563 cache.go:107] acquiring lock: {Name:mkd5d0f835b5a82fe0ea91a553ed69cdedb24993 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1009 20:17:52.001512  485563 cache.go:115] /home/jenkins/minikube-integration/21683-294150/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1 exists
	I1009 20:17:52.001516  485563 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.12.1" -> "/home/jenkins/minikube-integration/21683-294150/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1" took 33.355µs
	I1009 20:17:52.001522  485563 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.12.1 -> /home/jenkins/minikube-integration/21683-294150/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1 succeeded
	I1009 20:17:52.001530  485563 cache.go:87] Successfully saved all images to host disk.
	I1009 20:17:52.038396  485563 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon, skipping pull
	I1009 20:17:52.038418  485563 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 exists in daemon, skipping load
	I1009 20:17:52.038435  485563 cache.go:232] Successfully downloaded all kic artifacts
	I1009 20:17:52.038458  485563 start.go:361] acquireMachinesLock for no-preload-020313: {Name:mkd16c652d3af42b77740f1793cec5d9870abaca Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1009 20:17:52.038518  485563 start.go:365] duration metric: took 43.012µs to acquireMachinesLock for "no-preload-020313"
	I1009 20:17:52.038546  485563 start.go:97] Skipping create...Using existing machine configuration
	I1009 20:17:52.038554  485563 fix.go:55] fixHost starting: 
	I1009 20:17:52.038817  485563 cli_runner.go:164] Run: docker container inspect no-preload-020313 --format={{.State.Status}}
	I1009 20:17:52.068678  485563 fix.go:113] recreateIfNeeded on no-preload-020313: state=Stopped err=<nil>
	W1009 20:17:52.068725  485563 fix.go:139] unexpected machine state, will restart: <nil>
	
	
	==> CRI-O <==
	Oct 09 20:17:36 old-k8s-version-670649 crio[652]: time="2025-10-09T20:17:36.570698259Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 09 20:17:36 old-k8s-version-670649 crio[652]: time="2025-10-09T20:17:36.573951565Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 09 20:17:36 old-k8s-version-670649 crio[652]: time="2025-10-09T20:17:36.573987643Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 09 20:17:36 old-k8s-version-670649 crio[652]: time="2025-10-09T20:17:36.574011882Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 09 20:17:36 old-k8s-version-670649 crio[652]: time="2025-10-09T20:17:36.578075948Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 09 20:17:36 old-k8s-version-670649 crio[652]: time="2025-10-09T20:17:36.578113561Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 09 20:17:36 old-k8s-version-670649 crio[652]: time="2025-10-09T20:17:36.578137733Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 09 20:17:36 old-k8s-version-670649 crio[652]: time="2025-10-09T20:17:36.581721637Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 09 20:17:36 old-k8s-version-670649 crio[652]: time="2025-10-09T20:17:36.581792399Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 09 20:17:36 old-k8s-version-670649 crio[652]: time="2025-10-09T20:17:36.581815439Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 09 20:17:36 old-k8s-version-670649 crio[652]: time="2025-10-09T20:17:36.585079962Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 09 20:17:36 old-k8s-version-670649 crio[652]: time="2025-10-09T20:17:36.585141304Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 09 20:17:36 old-k8s-version-670649 crio[652]: time="2025-10-09T20:17:36.623981555Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=28630fd2-fe1b-4a73-bd2f-45a2cfd709cf name=/runtime.v1.ImageService/ImageStatus
	Oct 09 20:17:36 old-k8s-version-670649 crio[652]: time="2025-10-09T20:17:36.629433333Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=eca116de-284a-4015-aa3c-06cc2dacb04e name=/runtime.v1.ImageService/ImageStatus
	Oct 09 20:17:36 old-k8s-version-670649 crio[652]: time="2025-10-09T20:17:36.633458252Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-2jqxd/dashboard-metrics-scraper" id=198e173f-8d37-4b11-ab94-8872f5a6ad7c name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 20:17:36 old-k8s-version-670649 crio[652]: time="2025-10-09T20:17:36.633955687Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 09 20:17:36 old-k8s-version-670649 crio[652]: time="2025-10-09T20:17:36.651227285Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 09 20:17:36 old-k8s-version-670649 crio[652]: time="2025-10-09T20:17:36.655594831Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 09 20:17:36 old-k8s-version-670649 crio[652]: time="2025-10-09T20:17:36.675888372Z" level=info msg="Created container a71c866e69c210bdbe0a0dd45682a1800e2e4a640ba2d08c75b017add09c2faf: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-2jqxd/dashboard-metrics-scraper" id=198e173f-8d37-4b11-ab94-8872f5a6ad7c name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 20:17:36 old-k8s-version-670649 crio[652]: time="2025-10-09T20:17:36.682072977Z" level=info msg="Starting container: a71c866e69c210bdbe0a0dd45682a1800e2e4a640ba2d08c75b017add09c2faf" id=4637fc46-64fa-4df4-946c-b902558f235d name=/runtime.v1.RuntimeService/StartContainer
	Oct 09 20:17:36 old-k8s-version-670649 crio[652]: time="2025-10-09T20:17:36.685689825Z" level=info msg="Started container" PID=1700 containerID=a71c866e69c210bdbe0a0dd45682a1800e2e4a640ba2d08c75b017add09c2faf description=kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-2jqxd/dashboard-metrics-scraper id=4637fc46-64fa-4df4-946c-b902558f235d name=/runtime.v1.RuntimeService/StartContainer sandboxID=f50acb1ed89e669a2317f2d7df5b34e347b3f6eb90bf162677952070aa3c568a
	Oct 09 20:17:36 old-k8s-version-670649 conmon[1698]: conmon a71c866e69c210bdbe0a <ninfo>: container 1700 exited with status 1
	Oct 09 20:17:36 old-k8s-version-670649 crio[652]: time="2025-10-09T20:17:36.85864975Z" level=info msg="Removing container: 6ed7fe604c072156684da501c2834d007b87c77764f5fe155f964fb6d5c7099d" id=c19a348a-84a9-4b3b-b26a-0f3a15ccaba1 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 09 20:17:36 old-k8s-version-670649 crio[652]: time="2025-10-09T20:17:36.88192749Z" level=info msg="Error loading conmon cgroup of container 6ed7fe604c072156684da501c2834d007b87c77764f5fe155f964fb6d5c7099d: cgroup deleted" id=c19a348a-84a9-4b3b-b26a-0f3a15ccaba1 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 09 20:17:36 old-k8s-version-670649 crio[652]: time="2025-10-09T20:17:36.887138641Z" level=info msg="Removed container 6ed7fe604c072156684da501c2834d007b87c77764f5fe155f964fb6d5c7099d: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-2jqxd/dashboard-metrics-scraper" id=c19a348a-84a9-4b3b-b26a-0f3a15ccaba1 name=/runtime.v1.RuntimeService/RemoveContainer
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED              STATE               NAME                        ATTEMPT             POD ID              POD                                              NAMESPACE
	a71c866e69c21       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           16 seconds ago       Exited              dashboard-metrics-scraper   2                   f50acb1ed89e6       dashboard-metrics-scraper-5f989dc9cf-2jqxd       kubernetes-dashboard
	cb650f367de61       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           25 seconds ago       Running             storage-provisioner         2                   ffa005d21ceb5       storage-provisioner                              kube-system
	e8df8a1bb1ca7       docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf   31 seconds ago       Running             kubernetes-dashboard        0                   66d20e9b28f26       kubernetes-dashboard-8694d4445c-pv4kt            kubernetes-dashboard
	5133146e72c53       1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c                                           57 seconds ago       Running             busybox                     1                   ce1b38c84a89c       busybox                                          default
	d5ab970b7a5ec       97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108                                           57 seconds ago       Running             coredns                     1                   be7df4ec58dc0       coredns-5dd5756b68-kz799                         kube-system
	c00ae0a7c53b3       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           57 seconds ago       Exited              storage-provisioner         1                   ffa005d21ceb5       storage-provisioner                              kube-system
	f937ff34590cb       940f54a5bcae9dd4c97844fa36d12cc5d9078cffd5e677ad0df1528c12f3240d                                           57 seconds ago       Running             kube-proxy                  1                   bd4432cbf4b5c       kube-proxy-fffc5                                 kube-system
	5311b03994768       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                           57 seconds ago       Running             kindnet-cni                 1                   ce48f60386d6c       kindnet-4nzl2                                    kube-system
	35d9539ca66de       9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace                                           About a minute ago   Running             etcd                        1                   57207cac49537       etcd-old-k8s-version-670649                      kube-system
	2dbcc3dbc3674       46cc66ccc7c19b4b30625b0aa4e178792add2385659205d7c6fcbd05d78c23e5                                           About a minute ago   Running             kube-controller-manager     1                   a9c1abda9be01       kube-controller-manager-old-k8s-version-670649   kube-system
	dbb7cba7d5da3       00543d2fe5d71095984891a0609ee504b81f9d72a69a0ad02039d4e135213766                                           About a minute ago   Running             kube-apiserver              1                   b9726c0874c26       kube-apiserver-old-k8s-version-670649            kube-system
	c615280026154       762dce4090c5f4789bb5dbb933d5b50bc1a2357d7739bbce30d949820e5a38ee                                           About a minute ago   Running             kube-scheduler              1                   8022b4eb694d1       kube-scheduler-old-k8s-version-670649            kube-system
	
	
	==> coredns [d5ab970b7a5ec7cb60f4d5d6366e178aef08471353149dd93fed3173338875b5] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = b7aacdf6a6aa730aafe4d018cac9b7b5ecfb346cba84a99f64521f87aef8b4958639c1cf97967716465791d05bd38f372615327b7cb1d93c850bae532744d54d
	CoreDNS-1.10.1
	linux/arm64, go1.20, 055b2c3
	[INFO] 127.0.0.1:36565 - 26139 "HINFO IN 6610836821449462148.1132489493451489728. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.020811955s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> describe nodes <==
	Name:               old-k8s-version-670649
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=old-k8s-version-670649
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=67931d2cc0a2e29153be17ee4f2d502b8a45c9cb
	                    minikube.k8s.io/name=old-k8s-version-670649
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_09T20_15_41_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 09 Oct 2025 20:15:37 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-670649
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 09 Oct 2025 20:17:45 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 09 Oct 2025 20:17:46 +0000   Thu, 09 Oct 2025 20:15:34 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 09 Oct 2025 20:17:46 +0000   Thu, 09 Oct 2025 20:15:34 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 09 Oct 2025 20:17:46 +0000   Thu, 09 Oct 2025 20:15:34 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 09 Oct 2025 20:17:46 +0000   Thu, 09 Oct 2025 20:16:07 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    old-k8s-version-670649
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022292Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022292Ki
	  pods:               110
	System Info:
	  Machine ID:                 520117eb2c614e32ba12562d1c4a855b
	  System UUID:                d2088d50-dda3-441d-a1ce-e5d6a3366421
	  Boot ID:                    7eb9c7d9-8be8-415a-b196-052b632584a4
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.28.0
	  Kube-Proxy Version:         v1.28.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         103s
	  kube-system                 coredns-5dd5756b68-kz799                          100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     2m
	  kube-system                 etcd-old-k8s-version-670649                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         2m12s
	  kube-system                 kindnet-4nzl2                                     100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      2m
	  kube-system                 kube-apiserver-old-k8s-version-670649             250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m12s
	  kube-system                 kube-controller-manager-old-k8s-version-670649    200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m15s
	  kube-system                 kube-proxy-fffc5                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m
	  kube-system                 kube-scheduler-old-k8s-version-670649             100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m12s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         119s
	  kubernetes-dashboard        dashboard-metrics-scraper-5f989dc9cf-2jqxd        0 (0%)        0 (0%)      0 (0%)           0 (0%)         43s
	  kubernetes-dashboard        kubernetes-dashboard-8694d4445c-pv4kt             0 (0%)        0 (0%)      0 (0%)           0 (0%)         43s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 119s                   kube-proxy       
	  Normal  Starting                 55s                    kube-proxy       
	  Normal  Starting                 2m20s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  2m20s (x8 over 2m20s)  kubelet          Node old-k8s-version-670649 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m20s (x8 over 2m20s)  kubelet          Node old-k8s-version-670649 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m20s (x8 over 2m20s)  kubelet          Node old-k8s-version-670649 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    2m13s                  kubelet          Node old-k8s-version-670649 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  2m13s                  kubelet          Node old-k8s-version-670649 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     2m13s                  kubelet          Node old-k8s-version-670649 status is now: NodeHasSufficientPID
	  Normal  Starting                 2m13s                  kubelet          Starting kubelet.
	  Normal  RegisteredNode           2m1s                   node-controller  Node old-k8s-version-670649 event: Registered Node old-k8s-version-670649 in Controller
	  Normal  NodeReady                106s                   kubelet          Node old-k8s-version-670649 status is now: NodeReady
	  Normal  Starting                 68s                    kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  68s (x8 over 68s)      kubelet          Node old-k8s-version-670649 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    68s (x8 over 68s)      kubelet          Node old-k8s-version-670649 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     68s (x8 over 68s)      kubelet          Node old-k8s-version-670649 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           44s                    node-controller  Node old-k8s-version-670649 event: Registered Node old-k8s-version-670649 in Controller
	
	
	==> dmesg <==
	[Oct 9 19:45] overlayfs: idmapped layers are currently not supported
	[ +36.012100] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:47] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:48] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:49] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:50] overlayfs: idmapped layers are currently not supported
	[ +27.967875] overlayfs: idmapped layers are currently not supported
	[  +2.167003] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:51] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:52] overlayfs: idmapped layers are currently not supported
	[ +41.056229] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:53] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:54] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:55] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:57] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:59] overlayfs: idmapped layers are currently not supported
	[ +30.257956] overlayfs: idmapped layers are currently not supported
	[Oct 9 20:02] overlayfs: idmapped layers are currently not supported
	[Oct 9 20:04] overlayfs: idmapped layers are currently not supported
	[Oct 9 20:06] overlayfs: idmapped layers are currently not supported
	[Oct 9 20:12] overlayfs: idmapped layers are currently not supported
	[Oct 9 20:14] overlayfs: idmapped layers are currently not supported
	[Oct 9 20:15] overlayfs: idmapped layers are currently not supported
	[Oct 9 20:16] overlayfs: idmapped layers are currently not supported
	[ +23.810739] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [35d9539ca66deeeb7d54f441e4e9faa5be578cf1ccc3e88b622d80472a21a3aa] <==
	{"level":"info","ts":"2025-10-09T20:16:46.567983Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-10-09T20:16:46.568382Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 switched to configuration voters=(16896983918768216326)"}
	{"level":"info","ts":"2025-10-09T20:16:46.568456Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","added-peer-id":"ea7e25599daad906","added-peer-peer-urls":["https://192.168.76.2:2380"]}
	{"level":"info","ts":"2025-10-09T20:16:46.568547Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","cluster-version":"3.5"}
	{"level":"info","ts":"2025-10-09T20:16:46.568582Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-10-09T20:16:46.57024Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2025-10-09T20:16:46.570436Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"ea7e25599daad906","initial-advertise-peer-urls":["https://192.168.76.2:2380"],"listen-peer-urls":["https://192.168.76.2:2380"],"advertise-client-urls":["https://192.168.76.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.76.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-10-09T20:16:46.570477Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-10-09T20:16:46.570561Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2025-10-09T20:16:46.570568Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2025-10-09T20:16:47.593203Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 is starting a new election at term 2"}
	{"level":"info","ts":"2025-10-09T20:16:47.59325Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became pre-candidate at term 2"}
	{"level":"info","ts":"2025-10-09T20:16:47.593266Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgPreVoteResp from ea7e25599daad906 at term 2"}
	{"level":"info","ts":"2025-10-09T20:16:47.59328Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became candidate at term 3"}
	{"level":"info","ts":"2025-10-09T20:16:47.593287Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgVoteResp from ea7e25599daad906 at term 3"}
	{"level":"info","ts":"2025-10-09T20:16:47.593297Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became leader at term 3"}
	{"level":"info","ts":"2025-10-09T20:16:47.593305Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: ea7e25599daad906 elected leader ea7e25599daad906 at term 3"}
	{"level":"info","ts":"2025-10-09T20:16:47.601411Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"ea7e25599daad906","local-member-attributes":"{Name:old-k8s-version-670649 ClientURLs:[https://192.168.76.2:2379]}","request-path":"/0/members/ea7e25599daad906/attributes","cluster-id":"6f20f2c4b2fb5f8a","publish-timeout":"7s"}
	{"level":"info","ts":"2025-10-09T20:16:47.601616Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-10-09T20:16:47.602581Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.76.2:2379"}
	{"level":"info","ts":"2025-10-09T20:16:47.60263Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-10-09T20:16:47.603403Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-10-09T20:16:47.603468Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-10-09T20:16:47.603483Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-10-09T20:16:56.388529Z","caller":"traceutil/trace.go:171","msg":"trace[488274790] transaction","detail":"{read_only:false; response_revision:514; number_of_response:1; }","duration":"104.530086ms","start":"2025-10-09T20:16:56.283982Z","end":"2025-10-09T20:16:56.388512Z","steps":["trace[488274790] 'process raft request'  (duration: 94.490382ms)"],"step_count":1}
	
	
	==> kernel <==
	 20:17:53 up  3:00,  0 user,  load average: 2.45, 1.70, 1.61
	Linux old-k8s-version-670649 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [5311b03994768a53d0ae1759640177709434e395aecd0e575ce30445dd93333a] <==
	I1009 20:16:56.347881       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1009 20:16:56.365253       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1009 20:16:56.365387       1 main.go:148] setting mtu 1500 for CNI 
	I1009 20:16:56.365400       1 main.go:178] kindnetd IP family: "ipv4"
	I1009 20:16:56.365415       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-09T20:16:56Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1009 20:16:56.566678       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1009 20:16:56.566708       1 controller.go:381] "Waiting for informer caches to sync"
	I1009 20:16:56.566717       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1009 20:16:56.567496       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1009 20:17:26.569198       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1009 20:17:26.569389       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1009 20:17:26.569475       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1009 20:17:26.569500       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	I1009 20:17:28.166832       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1009 20:17:28.167590       1 metrics.go:72] Registering metrics
	I1009 20:17:28.167672       1 controller.go:711] "Syncing nftables rules"
	I1009 20:17:36.565466       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1009 20:17:36.565550       1 main.go:301] handling current node
	I1009 20:17:46.565421       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1009 20:17:46.565454       1 main.go:301] handling current node
	
	
	==> kube-apiserver [dbb7cba7d5da37c54a17588da4d76f5d70497f3e32a6e495c433bd46fb90292a] <==
	I1009 20:16:54.833050       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I1009 20:16:54.850060       1 shared_informer.go:318] Caches are synced for node_authorizer
	I1009 20:16:54.885735       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I1009 20:16:54.885766       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I1009 20:16:54.885978       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I1009 20:16:54.886024       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1009 20:16:54.895295       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I1009 20:16:54.895346       1 shared_informer.go:318] Caches are synced for configmaps
	I1009 20:16:54.895392       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1009 20:16:54.905312       1 aggregator.go:166] initial CRD sync complete...
	I1009 20:16:54.905337       1 autoregister_controller.go:141] Starting autoregister controller
	I1009 20:16:54.905345       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1009 20:16:54.905351       1 cache.go:39] Caches are synced for autoregister controller
	E1009 20:16:55.077475       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1009 20:16:55.418244       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1009 20:16:59.049162       1 controller.go:624] quota admission added evaluator for: namespaces
	I1009 20:16:59.112238       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1009 20:16:59.161553       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1009 20:16:59.175867       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1009 20:16:59.186912       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1009 20:16:59.260895       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.111.24.227"}
	I1009 20:16:59.299477       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.99.10.121"}
	I1009 20:17:09.621662       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1009 20:17:09.927196       1 controller.go:624] quota admission added evaluator for: endpoints
	I1009 20:17:10.079628       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [2dbcc3dbc3674682da2fd59a5223bfdbb8dff89e8f24cc4606b92f04b8486139] <==
	I1009 20:17:09.988700       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="56.435µs"
	I1009 20:17:10.099935       1 event.go:307] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set dashboard-metrics-scraper-5f989dc9cf to 1"
	I1009 20:17:10.104880       1 shared_informer.go:318] Caches are synced for garbage collector
	I1009 20:17:10.105460       1 event.go:307] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set kubernetes-dashboard-8694d4445c to 1"
	I1009 20:17:10.114959       1 shared_informer.go:318] Caches are synced for garbage collector
	I1009 20:17:10.115003       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I1009 20:17:10.119729       1 event.go:307] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: dashboard-metrics-scraper-5f989dc9cf-2jqxd"
	I1009 20:17:10.133097       1 event.go:307] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kubernetes-dashboard-8694d4445c-pv4kt"
	I1009 20:17:10.173420       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="85.171253ms"
	I1009 20:17:10.186397       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="94.302019ms"
	I1009 20:17:10.235558       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="49.107206ms"
	I1009 20:17:10.239559       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="1.727147ms"
	I1009 20:17:10.261539       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="46.499µs"
	I1009 20:17:10.295991       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="122.43494ms"
	I1009 20:17:10.296192       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="83.603µs"
	I1009 20:17:10.300169       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="54.277µs"
	I1009 20:17:16.809896       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="65.355µs"
	I1009 20:17:17.827339       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="86.647µs"
	I1009 20:17:18.827954       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="71.213µs"
	I1009 20:17:21.854083       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="23.455476ms"
	I1009 20:17:21.854222       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="78.754µs"
	I1009 20:17:35.854349       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="16.263675ms"
	I1009 20:17:35.854756       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="84.596µs"
	I1009 20:17:36.880443       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="72.223µs"
	I1009 20:17:41.970850       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="71.459µs"
	
	
	==> kube-proxy [f937ff34590cb60d397ac7c36418ba0efc5150992a944bf4f950e6e18660bffa] <==
	I1009 20:16:57.677329       1 server_others.go:69] "Using iptables proxy"
	I1009 20:16:57.798535       1 node.go:141] Successfully retrieved node IP: 192.168.76.2
	I1009 20:16:58.086495       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1009 20:16:58.095226       1 server_others.go:152] "Using iptables Proxier"
	I1009 20:16:58.095330       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1009 20:16:58.095364       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1009 20:16:58.095423       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1009 20:16:58.095657       1 server.go:846] "Version info" version="v1.28.0"
	I1009 20:16:58.095877       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1009 20:16:58.096636       1 config.go:188] "Starting service config controller"
	I1009 20:16:58.096737       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1009 20:16:58.096784       1 config.go:97] "Starting endpoint slice config controller"
	I1009 20:16:58.096810       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1009 20:16:58.101565       1 config.go:315] "Starting node config controller"
	I1009 20:16:58.102727       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1009 20:16:58.197695       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1009 20:16:58.197790       1 shared_informer.go:318] Caches are synced for service config
	I1009 20:16:58.213875       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [c615280026154e697494663ebf653ff25eb7cef14b02ea4bc2dce85a23e792fd] <==
	I1009 20:16:51.674180       1 serving.go:348] Generated self-signed cert in-memory
	I1009 20:16:57.261704       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.0"
	I1009 20:16:57.261734       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1009 20:16:57.299797       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I1009 20:16:57.299923       1 requestheader_controller.go:169] Starting RequestHeaderAuthRequestController
	I1009 20:16:57.299937       1 shared_informer.go:311] Waiting for caches to sync for RequestHeaderAuthRequestController
	I1009 20:16:57.299951       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I1009 20:16:57.319228       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1009 20:16:57.319331       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1009 20:16:57.319374       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1009 20:16:57.323872       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	I1009 20:16:57.445227       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1009 20:16:57.501505       1 shared_informer.go:318] Caches are synced for RequestHeaderAuthRequestController
	I1009 20:16:57.524598       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	
	
	==> kubelet <==
	Oct 09 20:17:11 old-k8s-version-670649 kubelet[779]: E1009 20:17:11.331209     779 projected.go:198] Error preparing data for projected volume kube-api-access-kd9g4 for pod kubernetes-dashboard/kubernetes-dashboard-8694d4445c-pv4kt: failed to sync configmap cache: timed out waiting for the condition
	Oct 09 20:17:11 old-k8s-version-670649 kubelet[779]: E1009 20:17:11.333460     779 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/475f099e-d888-4a61-99b6-f4998e50936f-kube-api-access-9n2q6 podName:475f099e-d888-4a61-99b6-f4998e50936f nodeName:}" failed. No retries permitted until 2025-10-09 20:17:11.832722681 +0000 UTC m=+26.528121130 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-9n2q6" (UniqueName: "kubernetes.io/projected/475f099e-d888-4a61-99b6-f4998e50936f-kube-api-access-9n2q6") pod "dashboard-metrics-scraper-5f989dc9cf-2jqxd" (UID: "475f099e-d888-4a61-99b6-f4998e50936f") : failed to sync configmap cache: timed out waiting for the condition
	Oct 09 20:17:11 old-k8s-version-670649 kubelet[779]: E1009 20:17:11.333531     779 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/a62b0cc0-36f9-44f2-96c7-87aed1665f8d-kube-api-access-kd9g4 podName:a62b0cc0-36f9-44f2-96c7-87aed1665f8d nodeName:}" failed. No retries permitted until 2025-10-09 20:17:11.833507218 +0000 UTC m=+26.528905618 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-kd9g4" (UniqueName: "kubernetes.io/projected/a62b0cc0-36f9-44f2-96c7-87aed1665f8d-kube-api-access-kd9g4") pod "kubernetes-dashboard-8694d4445c-pv4kt" (UID: "a62b0cc0-36f9-44f2-96c7-87aed1665f8d") : failed to sync configmap cache: timed out waiting for the condition
	Oct 09 20:17:11 old-k8s-version-670649 kubelet[779]: W1009 20:17:11.995167     779 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/242f5a73bf3408c78204127e16255d5d302b161639419f815a7a343ee83b928d/crio-f50acb1ed89e669a2317f2d7df5b34e347b3f6eb90bf162677952070aa3c568a WatchSource:0}: Error finding container f50acb1ed89e669a2317f2d7df5b34e347b3f6eb90bf162677952070aa3c568a: Status 404 returned error can't find the container with id f50acb1ed89e669a2317f2d7df5b34e347b3f6eb90bf162677952070aa3c568a
	Oct 09 20:17:12 old-k8s-version-670649 kubelet[779]: W1009 20:17:12.021498     779 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/242f5a73bf3408c78204127e16255d5d302b161639419f815a7a343ee83b928d/crio-66d20e9b28f262feb5c47848ca91efccce0a159344f085dba649e04d7fc7dcd2 WatchSource:0}: Error finding container 66d20e9b28f262feb5c47848ca91efccce0a159344f085dba649e04d7fc7dcd2: Status 404 returned error can't find the container with id 66d20e9b28f262feb5c47848ca91efccce0a159344f085dba649e04d7fc7dcd2
	Oct 09 20:17:16 old-k8s-version-670649 kubelet[779]: I1009 20:17:16.795300     779 scope.go:117] "RemoveContainer" containerID="126370ef452b90fed472d5d1ac8802bfdd638bc5cec858657c8691433a06ce6d"
	Oct 09 20:17:17 old-k8s-version-670649 kubelet[779]: I1009 20:17:17.802729     779 scope.go:117] "RemoveContainer" containerID="6ed7fe604c072156684da501c2834d007b87c77764f5fe155f964fb6d5c7099d"
	Oct 09 20:17:17 old-k8s-version-670649 kubelet[779]: E1009 20:17:17.803010     779 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-2jqxd_kubernetes-dashboard(475f099e-d888-4a61-99b6-f4998e50936f)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-2jqxd" podUID="475f099e-d888-4a61-99b6-f4998e50936f"
	Oct 09 20:17:17 old-k8s-version-670649 kubelet[779]: I1009 20:17:17.810876     779 scope.go:117] "RemoveContainer" containerID="126370ef452b90fed472d5d1ac8802bfdd638bc5cec858657c8691433a06ce6d"
	Oct 09 20:17:18 old-k8s-version-670649 kubelet[779]: I1009 20:17:18.805725     779 scope.go:117] "RemoveContainer" containerID="6ed7fe604c072156684da501c2834d007b87c77764f5fe155f964fb6d5c7099d"
	Oct 09 20:17:18 old-k8s-version-670649 kubelet[779]: E1009 20:17:18.806005     779 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-2jqxd_kubernetes-dashboard(475f099e-d888-4a61-99b6-f4998e50936f)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-2jqxd" podUID="475f099e-d888-4a61-99b6-f4998e50936f"
	Oct 09 20:17:21 old-k8s-version-670649 kubelet[779]: I1009 20:17:21.956044     779 scope.go:117] "RemoveContainer" containerID="6ed7fe604c072156684da501c2834d007b87c77764f5fe155f964fb6d5c7099d"
	Oct 09 20:17:21 old-k8s-version-670649 kubelet[779]: E1009 20:17:21.956855     779 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-2jqxd_kubernetes-dashboard(475f099e-d888-4a61-99b6-f4998e50936f)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-2jqxd" podUID="475f099e-d888-4a61-99b6-f4998e50936f"
	Oct 09 20:17:27 old-k8s-version-670649 kubelet[779]: I1009 20:17:27.830741     779 scope.go:117] "RemoveContainer" containerID="c00ae0a7c53b38c6eac3d76a7f59448c9a2d7b83553cd419b403f69d70cbc2fd"
	Oct 09 20:17:27 old-k8s-version-670649 kubelet[779]: I1009 20:17:27.899971     779 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-pv4kt" podStartSLOduration=8.278069781 podCreationTimestamp="2025-10-09 20:17:10 +0000 UTC" firstStartedPulling="2025-10-09 20:17:12.03108538 +0000 UTC m=+26.726483780" lastFinishedPulling="2025-10-09 20:17:21.651545865 +0000 UTC m=+36.346944264" observedRunningTime="2025-10-09 20:17:21.829165875 +0000 UTC m=+36.524564299" watchObservedRunningTime="2025-10-09 20:17:27.898530265 +0000 UTC m=+42.593928673"
	Oct 09 20:17:36 old-k8s-version-670649 kubelet[779]: I1009 20:17:36.623374     779 scope.go:117] "RemoveContainer" containerID="6ed7fe604c072156684da501c2834d007b87c77764f5fe155f964fb6d5c7099d"
	Oct 09 20:17:36 old-k8s-version-670649 kubelet[779]: I1009 20:17:36.853077     779 scope.go:117] "RemoveContainer" containerID="6ed7fe604c072156684da501c2834d007b87c77764f5fe155f964fb6d5c7099d"
	Oct 09 20:17:36 old-k8s-version-670649 kubelet[779]: I1009 20:17:36.853973     779 scope.go:117] "RemoveContainer" containerID="a71c866e69c210bdbe0a0dd45682a1800e2e4a640ba2d08c75b017add09c2faf"
	Oct 09 20:17:36 old-k8s-version-670649 kubelet[779]: E1009 20:17:36.854247     779 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-2jqxd_kubernetes-dashboard(475f099e-d888-4a61-99b6-f4998e50936f)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-2jqxd" podUID="475f099e-d888-4a61-99b6-f4998e50936f"
	Oct 09 20:17:41 old-k8s-version-670649 kubelet[779]: I1009 20:17:41.955922     779 scope.go:117] "RemoveContainer" containerID="a71c866e69c210bdbe0a0dd45682a1800e2e4a640ba2d08c75b017add09c2faf"
	Oct 09 20:17:41 old-k8s-version-670649 kubelet[779]: E1009 20:17:41.956794     779 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-2jqxd_kubernetes-dashboard(475f099e-d888-4a61-99b6-f4998e50936f)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-2jqxd" podUID="475f099e-d888-4a61-99b6-f4998e50936f"
	Oct 09 20:17:49 old-k8s-version-670649 kubelet[779]: I1009 20:17:49.800216     779 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	Oct 09 20:17:49 old-k8s-version-670649 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 09 20:17:49 old-k8s-version-670649 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 09 20:17:49 old-k8s-version-670649 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	
	
	==> kubernetes-dashboard [e8df8a1bb1ca7ae3c4afd2076b94df79858ba48bc7832eecd672642171f287c3] <==
	2025/10/09 20:17:21 Using namespace: kubernetes-dashboard
	2025/10/09 20:17:21 Using in-cluster config to connect to apiserver
	2025/10/09 20:17:21 Using secret token for csrf signing
	2025/10/09 20:17:21 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/10/09 20:17:21 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/10/09 20:17:21 Successful initial request to the apiserver, version: v1.28.0
	2025/10/09 20:17:21 Generating JWE encryption key
	2025/10/09 20:17:21 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/10/09 20:17:21 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/10/09 20:17:21 Initializing JWE encryption key from synchronized object
	2025/10/09 20:17:21 Creating in-cluster Sidecar client
	2025/10/09 20:17:21 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/09 20:17:21 Serving insecurely on HTTP port: 9090
	2025/10/09 20:17:51 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/09 20:17:21 Starting overwatch
	
	
	==> storage-provisioner [c00ae0a7c53b38c6eac3d76a7f59448c9a2d7b83553cd419b403f69d70cbc2fd] <==
	I1009 20:16:57.177345       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1009 20:17:27.217824       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [cb650f367de61c5e442498f548a4c14a5e49c4b9f22fdd5267e068a8e42bee89] <==
	I1009 20:17:27.941186       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1009 20:17:27.955620       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1009 20:17:27.955743       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1009 20:17:45.366032       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1009 20:17:45.366272       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-670649_d00cb327-b85d-4951-9c59-9578e5c0cbb4!
	I1009 20:17:45.372275       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"1ca381e3-6bc7-4716-b803-0241acff8a2f", APIVersion:"v1", ResourceVersion:"676", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-670649_d00cb327-b85d-4951-9c59-9578e5c0cbb4 became leader
	I1009 20:17:45.466849       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-670649_d00cb327-b85d-4951-9c59-9578e5c0cbb4!
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-670649 -n old-k8s-version-670649
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-670649 -n old-k8s-version-670649: exit status 2 (417.720496ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context old-k8s-version-670649 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect old-k8s-version-670649
helpers_test.go:243: (dbg) docker inspect old-k8s-version-670649:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "242f5a73bf3408c78204127e16255d5d302b161639419f815a7a343ee83b928d",
	        "Created": "2025-10-09T20:15:15.014520334Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 481515,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-09T20:16:38.13820366Z",
	            "FinishedAt": "2025-10-09T20:16:35.507324918Z"
	        },
	        "Image": "sha256:0e619a02aa1f399c748a05785ca381533d7a01df724b62f3a9ddd9e288db8f6f",
	        "ResolvConfPath": "/var/lib/docker/containers/242f5a73bf3408c78204127e16255d5d302b161639419f815a7a343ee83b928d/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/242f5a73bf3408c78204127e16255d5d302b161639419f815a7a343ee83b928d/hostname",
	        "HostsPath": "/var/lib/docker/containers/242f5a73bf3408c78204127e16255d5d302b161639419f815a7a343ee83b928d/hosts",
	        "LogPath": "/var/lib/docker/containers/242f5a73bf3408c78204127e16255d5d302b161639419f815a7a343ee83b928d/242f5a73bf3408c78204127e16255d5d302b161639419f815a7a343ee83b928d-json.log",
	        "Name": "/old-k8s-version-670649",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-670649:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-670649",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "242f5a73bf3408c78204127e16255d5d302b161639419f815a7a343ee83b928d",
	                "LowerDir": "/var/lib/docker/overlay2/27381f7e732d8c7d661645d8c8bce4a7b4487d7ccc8446c8ec75884f80dfc2aa-init/diff:/var/lib/docker/overlay2/810a91395ed9b7ed2c0bbbdee8600efcf64f88722cbabc47d471235a9f901ed9/diff",
	                "MergedDir": "/var/lib/docker/overlay2/27381f7e732d8c7d661645d8c8bce4a7b4487d7ccc8446c8ec75884f80dfc2aa/merged",
	                "UpperDir": "/var/lib/docker/overlay2/27381f7e732d8c7d661645d8c8bce4a7b4487d7ccc8446c8ec75884f80dfc2aa/diff",
	                "WorkDir": "/var/lib/docker/overlay2/27381f7e732d8c7d661645d8c8bce4a7b4487d7ccc8446c8ec75884f80dfc2aa/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-670649",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-670649/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-670649",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-670649",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-670649",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "b9d636cca443fc40e683831175ed0fd35e707a8bee5a5ea62739b2547fd638cb",
	            "SandboxKey": "/var/run/docker/netns/b9d636cca443",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33426"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33427"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33430"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33428"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33429"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-670649": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "2a:3a:6e:86:87:c4",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "9f71ce8c90e918d3740f414c21f48298da6003535f949f572c810d48866acbdf",
	                    "EndpointID": "6c4b1f01efcfbc59c1dd4dd21971719b3bdf1fa4db2bee28353047e490d333cf",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-670649",
	                        "242f5a73bf34"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-670649 -n old-k8s-version-670649
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-670649 -n old-k8s-version-670649: exit status 2 (365.969216ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-670649 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p old-k8s-version-670649 logs -n 25: (1.45143703s)
helpers_test.go:260: TestStartStop/group/old-k8s-version/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p cilium-535911 sudo crio config                                                                                                                                                                                                             │ cilium-535911             │ jenkins │ v1.37.0 │ 09 Oct 25 20:05 UTC │                     │
	│ delete  │ -p cilium-535911                                                                                                                                                                                                                              │ cilium-535911             │ jenkins │ v1.37.0 │ 09 Oct 25 20:05 UTC │ 09 Oct 25 20:05 UTC │
	│ start   │ -p force-systemd-env-242564 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                                                                                                                                    │ force-systemd-env-242564  │ jenkins │ v1.37.0 │ 09 Oct 25 20:05 UTC │                     │
	│ ssh     │ force-systemd-flag-736218 ssh cat /etc/crio/crio.conf.d/02-crio.conf                                                                                                                                                                          │ force-systemd-flag-736218 │ jenkins │ v1.37.0 │ 09 Oct 25 20:12 UTC │ 09 Oct 25 20:12 UTC │
	│ delete  │ -p force-systemd-flag-736218                                                                                                                                                                                                                  │ force-systemd-flag-736218 │ jenkins │ v1.37.0 │ 09 Oct 25 20:12 UTC │ 09 Oct 25 20:12 UTC │
	│ start   │ -p cert-expiration-282540 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio                                                                                                                                        │ cert-expiration-282540    │ jenkins │ v1.37.0 │ 09 Oct 25 20:12 UTC │ 09 Oct 25 20:12 UTC │
	│ delete  │ -p force-systemd-env-242564                                                                                                                                                                                                                   │ force-systemd-env-242564  │ jenkins │ v1.37.0 │ 09 Oct 25 20:14 UTC │ 09 Oct 25 20:14 UTC │
	│ start   │ -p cert-options-038875 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio                     │ cert-options-038875       │ jenkins │ v1.37.0 │ 09 Oct 25 20:14 UTC │ 09 Oct 25 20:15 UTC │
	│ ssh     │ cert-options-038875 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                                                   │ cert-options-038875       │ jenkins │ v1.37.0 │ 09 Oct 25 20:15 UTC │ 09 Oct 25 20:15 UTC │
	│ ssh     │ -p cert-options-038875 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                 │ cert-options-038875       │ jenkins │ v1.37.0 │ 09 Oct 25 20:15 UTC │ 09 Oct 25 20:15 UTC │
	│ delete  │ -p cert-options-038875                                                                                                                                                                                                                        │ cert-options-038875       │ jenkins │ v1.37.0 │ 09 Oct 25 20:15 UTC │ 09 Oct 25 20:15 UTC │
	│ start   │ -p old-k8s-version-670649 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-670649    │ jenkins │ v1.37.0 │ 09 Oct 25 20:15 UTC │ 09 Oct 25 20:16 UTC │
	│ start   │ -p cert-expiration-282540 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-282540    │ jenkins │ v1.37.0 │ 09 Oct 25 20:15 UTC │ 09 Oct 25 20:16 UTC │
	│ delete  │ -p cert-expiration-282540                                                                                                                                                                                                                     │ cert-expiration-282540    │ jenkins │ v1.37.0 │ 09 Oct 25 20:16 UTC │ 09 Oct 25 20:16 UTC │
	│ start   │ -p no-preload-020313 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-020313         │ jenkins │ v1.37.0 │ 09 Oct 25 20:16 UTC │ 09 Oct 25 20:17 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-670649 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-670649    │ jenkins │ v1.37.0 │ 09 Oct 25 20:16 UTC │                     │
	│ stop    │ -p old-k8s-version-670649 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-670649    │ jenkins │ v1.37.0 │ 09 Oct 25 20:16 UTC │ 09 Oct 25 20:16 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-670649 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-670649    │ jenkins │ v1.37.0 │ 09 Oct 25 20:16 UTC │ 09 Oct 25 20:16 UTC │
	│ start   │ -p old-k8s-version-670649 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-670649    │ jenkins │ v1.37.0 │ 09 Oct 25 20:16 UTC │ 09 Oct 25 20:17 UTC │
	│ addons  │ enable metrics-server -p no-preload-020313 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-020313         │ jenkins │ v1.37.0 │ 09 Oct 25 20:17 UTC │                     │
	│ stop    │ -p no-preload-020313 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-020313         │ jenkins │ v1.37.0 │ 09 Oct 25 20:17 UTC │ 09 Oct 25 20:17 UTC │
	│ image   │ old-k8s-version-670649 image list --format=json                                                                                                                                                                                               │ old-k8s-version-670649    │ jenkins │ v1.37.0 │ 09 Oct 25 20:17 UTC │ 09 Oct 25 20:17 UTC │
	│ pause   │ -p old-k8s-version-670649 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-670649    │ jenkins │ v1.37.0 │ 09 Oct 25 20:17 UTC │                     │
	│ addons  │ enable dashboard -p no-preload-020313 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-020313         │ jenkins │ v1.37.0 │ 09 Oct 25 20:17 UTC │ 09 Oct 25 20:17 UTC │
	│ start   │ -p no-preload-020313 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-020313         │ jenkins │ v1.37.0 │ 09 Oct 25 20:17 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────
────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/09 20:17:51
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1009 20:17:51.705859  485563 out.go:360] Setting OutFile to fd 1 ...
	I1009 20:17:51.706105  485563 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1009 20:17:51.706132  485563 out.go:374] Setting ErrFile to fd 2...
	I1009 20:17:51.706150  485563 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1009 20:17:51.706475  485563 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21683-294150/.minikube/bin
	I1009 20:17:51.706929  485563 out.go:368] Setting JSON to false
	I1009 20:17:51.714177  485563 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":10811,"bootTime":1760030261,"procs":188,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1009 20:17:51.714312  485563 start.go:143] virtualization:  
	I1009 20:17:51.718069  485563 out.go:179] * [no-preload-020313] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1009 20:17:51.721947  485563 out.go:179]   - MINIKUBE_LOCATION=21683
	I1009 20:17:51.722177  485563 notify.go:221] Checking for updates...
	I1009 20:17:51.728340  485563 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1009 20:17:51.731309  485563 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21683-294150/kubeconfig
	I1009 20:17:51.734363  485563 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21683-294150/.minikube
	I1009 20:17:51.737344  485563 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1009 20:17:51.740394  485563 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1009 20:17:51.743903  485563 config.go:182] Loaded profile config "no-preload-020313": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1009 20:17:51.744541  485563 driver.go:422] Setting default libvirt URI to qemu:///system
	I1009 20:17:51.766637  485563 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1009 20:17:51.766844  485563 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1009 20:17:51.873522  485563 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-09 20:17:51.861750759 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214827008 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1009 20:17:51.873639  485563 docker.go:319] overlay module found
	I1009 20:17:51.876851  485563 out.go:179] * Using the docker driver based on existing profile
	I1009 20:17:51.879721  485563 start.go:309] selected driver: docker
	I1009 20:17:51.879743  485563 start.go:930] validating driver "docker" against &{Name:no-preload-020313 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-020313 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1009 20:17:51.879843  485563 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1009 20:17:51.880590  485563 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1009 20:17:51.985418  485563 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-09 20:17:51.973012117 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214827008 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1009 20:17:51.985769  485563 start_flags.go:993] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1009 20:17:51.985795  485563 cni.go:84] Creating CNI manager for ""
	I1009 20:17:51.985859  485563 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1009 20:17:51.985894  485563 start.go:353] cluster config:
	{Name:no-preload-020313 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-020313 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1009 20:17:51.989277  485563 out.go:179] * Starting "no-preload-020313" primary control-plane node in "no-preload-020313" cluster
	I1009 20:17:51.992697  485563 cache.go:123] Beginning downloading kic base image for docker with crio
	I1009 20:17:51.996456  485563 out.go:179] * Pulling base image v0.0.48-1759745255-21703 ...
	I1009 20:17:51.999912  485563 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1009 20:17:52.000090  485563 profile.go:143] Saving config to /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/no-preload-020313/config.json ...
	I1009 20:17:52.000471  485563 cache.go:107] acquiring lock: {Name:mk067853efdb9d5dfe210e9bdb60a1140d344bf6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1009 20:17:52.000573  485563 cache.go:115] /home/jenkins/minikube-integration/21683-294150/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1009 20:17:52.000588  485563 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/21683-294150/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 136.543µs
	I1009 20:17:52.000610  485563 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/21683-294150/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1009 20:17:52.000626  485563 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon
	I1009 20:17:52.000869  485563 cache.go:107] acquiring lock: {Name:mk549023c9da29243b6f2f23c58ca3df426147a2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1009 20:17:52.000951  485563 cache.go:115] /home/jenkins/minikube-integration/21683-294150/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1 exists
	I1009 20:17:52.000961  485563 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.34.1" -> "/home/jenkins/minikube-integration/21683-294150/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1" took 98.594µs
	I1009 20:17:52.000968  485563 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.34.1 -> /home/jenkins/minikube-integration/21683-294150/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1 succeeded
	I1009 20:17:52.000981  485563 cache.go:107] acquiring lock: {Name:mk9525a25fb678d6580f1eb602de12141a8b59a8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1009 20:17:52.001012  485563 cache.go:115] /home/jenkins/minikube-integration/21683-294150/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1 exists
	I1009 20:17:52.001028  485563 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.34.1" -> "/home/jenkins/minikube-integration/21683-294150/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1" took 38.335µs
	I1009 20:17:52.001102  485563 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.34.1 -> /home/jenkins/minikube-integration/21683-294150/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1 succeeded
	I1009 20:17:52.001153  485563 cache.go:107] acquiring lock: {Name:mk65f6488cbc08e9947528f7f60d66925e264a10 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1009 20:17:52.001197  485563 cache.go:115] /home/jenkins/minikube-integration/21683-294150/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1 exists
	I1009 20:17:52.001202  485563 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.34.1" -> "/home/jenkins/minikube-integration/21683-294150/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1" took 52.169µs
	I1009 20:17:52.001208  485563 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.34.1 -> /home/jenkins/minikube-integration/21683-294150/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1 succeeded
	I1009 20:17:52.001218  485563 cache.go:107] acquiring lock: {Name:mkef8cd450b6ec8be1600cd17c6da55958b25391 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1009 20:17:52.001246  485563 cache.go:115] /home/jenkins/minikube-integration/21683-294150/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1 exists
	I1009 20:17:52.001337  485563 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.34.1" -> "/home/jenkins/minikube-integration/21683-294150/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1" took 119.501µs
	I1009 20:17:52.001346  485563 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.34.1 -> /home/jenkins/minikube-integration/21683-294150/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1 succeeded
	I1009 20:17:52.001362  485563 cache.go:107] acquiring lock: {Name:mkd217de9f557eca101e9a8593531ca54ad0485b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1009 20:17:52.001413  485563 cache.go:115] /home/jenkins/minikube-integration/21683-294150/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0 exists
	I1009 20:17:52.001419  485563 cache.go:96] cache image "registry.k8s.io/etcd:3.6.4-0" -> "/home/jenkins/minikube-integration/21683-294150/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0" took 59.513µs
	I1009 20:17:52.001425  485563 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.4-0 -> /home/jenkins/minikube-integration/21683-294150/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0 succeeded
	I1009 20:17:52.001435  485563 cache.go:107] acquiring lock: {Name:mkac1bf7d8d221e16de37f34c6c9a23b671148bc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1009 20:17:52.001463  485563 cache.go:115] /home/jenkins/minikube-integration/21683-294150/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 exists
	I1009 20:17:52.001468  485563 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/21683-294150/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1" took 35.586µs
	I1009 20:17:52.001474  485563 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/21683-294150/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 succeeded
	I1009 20:17:52.001484  485563 cache.go:107] acquiring lock: {Name:mkd5d0f835b5a82fe0ea91a553ed69cdedb24993 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1009 20:17:52.001512  485563 cache.go:115] /home/jenkins/minikube-integration/21683-294150/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1 exists
	I1009 20:17:52.001516  485563 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.12.1" -> "/home/jenkins/minikube-integration/21683-294150/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1" took 33.355µs
	I1009 20:17:52.001522  485563 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.12.1 -> /home/jenkins/minikube-integration/21683-294150/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1 succeeded
	I1009 20:17:52.001530  485563 cache.go:87] Successfully saved all images to host disk.
	I1009 20:17:52.038396  485563 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon, skipping pull
	I1009 20:17:52.038418  485563 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 exists in daemon, skipping load
	I1009 20:17:52.038435  485563 cache.go:232] Successfully downloaded all kic artifacts
	I1009 20:17:52.038458  485563 start.go:361] acquireMachinesLock for no-preload-020313: {Name:mkd16c652d3af42b77740f1793cec5d9870abaca Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1009 20:17:52.038518  485563 start.go:365] duration metric: took 43.012µs to acquireMachinesLock for "no-preload-020313"
	I1009 20:17:52.038546  485563 start.go:97] Skipping create...Using existing machine configuration
	I1009 20:17:52.038554  485563 fix.go:55] fixHost starting: 
	I1009 20:17:52.038817  485563 cli_runner.go:164] Run: docker container inspect no-preload-020313 --format={{.State.Status}}
	I1009 20:17:52.068678  485563 fix.go:113] recreateIfNeeded on no-preload-020313: state=Stopped err=<nil>
	W1009 20:17:52.068725  485563 fix.go:139] unexpected machine state, will restart: <nil>
	
	
	==> CRI-O <==
	Oct 09 20:17:36 old-k8s-version-670649 crio[652]: time="2025-10-09T20:17:36.570698259Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 09 20:17:36 old-k8s-version-670649 crio[652]: time="2025-10-09T20:17:36.573951565Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 09 20:17:36 old-k8s-version-670649 crio[652]: time="2025-10-09T20:17:36.573987643Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 09 20:17:36 old-k8s-version-670649 crio[652]: time="2025-10-09T20:17:36.574011882Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 09 20:17:36 old-k8s-version-670649 crio[652]: time="2025-10-09T20:17:36.578075948Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 09 20:17:36 old-k8s-version-670649 crio[652]: time="2025-10-09T20:17:36.578113561Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 09 20:17:36 old-k8s-version-670649 crio[652]: time="2025-10-09T20:17:36.578137733Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 09 20:17:36 old-k8s-version-670649 crio[652]: time="2025-10-09T20:17:36.581721637Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 09 20:17:36 old-k8s-version-670649 crio[652]: time="2025-10-09T20:17:36.581792399Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 09 20:17:36 old-k8s-version-670649 crio[652]: time="2025-10-09T20:17:36.581815439Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 09 20:17:36 old-k8s-version-670649 crio[652]: time="2025-10-09T20:17:36.585079962Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 09 20:17:36 old-k8s-version-670649 crio[652]: time="2025-10-09T20:17:36.585141304Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 09 20:17:36 old-k8s-version-670649 crio[652]: time="2025-10-09T20:17:36.623981555Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=28630fd2-fe1b-4a73-bd2f-45a2cfd709cf name=/runtime.v1.ImageService/ImageStatus
	Oct 09 20:17:36 old-k8s-version-670649 crio[652]: time="2025-10-09T20:17:36.629433333Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=eca116de-284a-4015-aa3c-06cc2dacb04e name=/runtime.v1.ImageService/ImageStatus
	Oct 09 20:17:36 old-k8s-version-670649 crio[652]: time="2025-10-09T20:17:36.633458252Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-2jqxd/dashboard-metrics-scraper" id=198e173f-8d37-4b11-ab94-8872f5a6ad7c name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 20:17:36 old-k8s-version-670649 crio[652]: time="2025-10-09T20:17:36.633955687Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 09 20:17:36 old-k8s-version-670649 crio[652]: time="2025-10-09T20:17:36.651227285Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 09 20:17:36 old-k8s-version-670649 crio[652]: time="2025-10-09T20:17:36.655594831Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 09 20:17:36 old-k8s-version-670649 crio[652]: time="2025-10-09T20:17:36.675888372Z" level=info msg="Created container a71c866e69c210bdbe0a0dd45682a1800e2e4a640ba2d08c75b017add09c2faf: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-2jqxd/dashboard-metrics-scraper" id=198e173f-8d37-4b11-ab94-8872f5a6ad7c name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 20:17:36 old-k8s-version-670649 crio[652]: time="2025-10-09T20:17:36.682072977Z" level=info msg="Starting container: a71c866e69c210bdbe0a0dd45682a1800e2e4a640ba2d08c75b017add09c2faf" id=4637fc46-64fa-4df4-946c-b902558f235d name=/runtime.v1.RuntimeService/StartContainer
	Oct 09 20:17:36 old-k8s-version-670649 crio[652]: time="2025-10-09T20:17:36.685689825Z" level=info msg="Started container" PID=1700 containerID=a71c866e69c210bdbe0a0dd45682a1800e2e4a640ba2d08c75b017add09c2faf description=kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-2jqxd/dashboard-metrics-scraper id=4637fc46-64fa-4df4-946c-b902558f235d name=/runtime.v1.RuntimeService/StartContainer sandboxID=f50acb1ed89e669a2317f2d7df5b34e347b3f6eb90bf162677952070aa3c568a
	Oct 09 20:17:36 old-k8s-version-670649 conmon[1698]: conmon a71c866e69c210bdbe0a <ninfo>: container 1700 exited with status 1
	Oct 09 20:17:36 old-k8s-version-670649 crio[652]: time="2025-10-09T20:17:36.85864975Z" level=info msg="Removing container: 6ed7fe604c072156684da501c2834d007b87c77764f5fe155f964fb6d5c7099d" id=c19a348a-84a9-4b3b-b26a-0f3a15ccaba1 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 09 20:17:36 old-k8s-version-670649 crio[652]: time="2025-10-09T20:17:36.88192749Z" level=info msg="Error loading conmon cgroup of container 6ed7fe604c072156684da501c2834d007b87c77764f5fe155f964fb6d5c7099d: cgroup deleted" id=c19a348a-84a9-4b3b-b26a-0f3a15ccaba1 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 09 20:17:36 old-k8s-version-670649 crio[652]: time="2025-10-09T20:17:36.887138641Z" level=info msg="Removed container 6ed7fe604c072156684da501c2834d007b87c77764f5fe155f964fb6d5c7099d: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-2jqxd/dashboard-metrics-scraper" id=c19a348a-84a9-4b3b-b26a-0f3a15ccaba1 name=/runtime.v1.RuntimeService/RemoveContainer
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED              STATE               NAME                        ATTEMPT             POD ID              POD                                              NAMESPACE
	a71c866e69c21       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           18 seconds ago       Exited              dashboard-metrics-scraper   2                   f50acb1ed89e6       dashboard-metrics-scraper-5f989dc9cf-2jqxd       kubernetes-dashboard
	cb650f367de61       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           27 seconds ago       Running             storage-provisioner         2                   ffa005d21ceb5       storage-provisioner                              kube-system
	e8df8a1bb1ca7       docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf   33 seconds ago       Running             kubernetes-dashboard        0                   66d20e9b28f26       kubernetes-dashboard-8694d4445c-pv4kt            kubernetes-dashboard
	5133146e72c53       1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c                                           59 seconds ago       Running             busybox                     1                   ce1b38c84a89c       busybox                                          default
	d5ab970b7a5ec       97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108                                           59 seconds ago       Running             coredns                     1                   be7df4ec58dc0       coredns-5dd5756b68-kz799                         kube-system
	c00ae0a7c53b3       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           59 seconds ago       Exited              storage-provisioner         1                   ffa005d21ceb5       storage-provisioner                              kube-system
	f937ff34590cb       940f54a5bcae9dd4c97844fa36d12cc5d9078cffd5e677ad0df1528c12f3240d                                           59 seconds ago       Running             kube-proxy                  1                   bd4432cbf4b5c       kube-proxy-fffc5                                 kube-system
	5311b03994768       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                           59 seconds ago       Running             kindnet-cni                 1                   ce48f60386d6c       kindnet-4nzl2                                    kube-system
	35d9539ca66de       9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace                                           About a minute ago   Running             etcd                        1                   57207cac49537       etcd-old-k8s-version-670649                      kube-system
	2dbcc3dbc3674       46cc66ccc7c19b4b30625b0aa4e178792add2385659205d7c6fcbd05d78c23e5                                           About a minute ago   Running             kube-controller-manager     1                   a9c1abda9be01       kube-controller-manager-old-k8s-version-670649   kube-system
	dbb7cba7d5da3       00543d2fe5d71095984891a0609ee504b81f9d72a69a0ad02039d4e135213766                                           About a minute ago   Running             kube-apiserver              1                   b9726c0874c26       kube-apiserver-old-k8s-version-670649            kube-system
	c615280026154       762dce4090c5f4789bb5dbb933d5b50bc1a2357d7739bbce30d949820e5a38ee                                           About a minute ago   Running             kube-scheduler              1                   8022b4eb694d1       kube-scheduler-old-k8s-version-670649            kube-system
	
	
	==> coredns [d5ab970b7a5ec7cb60f4d5d6366e178aef08471353149dd93fed3173338875b5] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = b7aacdf6a6aa730aafe4d018cac9b7b5ecfb346cba84a99f64521f87aef8b4958639c1cf97967716465791d05bd38f372615327b7cb1d93c850bae532744d54d
	CoreDNS-1.10.1
	linux/arm64, go1.20, 055b2c3
	[INFO] 127.0.0.1:36565 - 26139 "HINFO IN 6610836821449462148.1132489493451489728. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.020811955s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> describe nodes <==
	Name:               old-k8s-version-670649
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=old-k8s-version-670649
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=67931d2cc0a2e29153be17ee4f2d502b8a45c9cb
	                    minikube.k8s.io/name=old-k8s-version-670649
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_09T20_15_41_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 09 Oct 2025 20:15:37 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-670649
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 09 Oct 2025 20:17:45 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 09 Oct 2025 20:17:46 +0000   Thu, 09 Oct 2025 20:15:34 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 09 Oct 2025 20:17:46 +0000   Thu, 09 Oct 2025 20:15:34 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 09 Oct 2025 20:17:46 +0000   Thu, 09 Oct 2025 20:15:34 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 09 Oct 2025 20:17:46 +0000   Thu, 09 Oct 2025 20:16:07 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    old-k8s-version-670649
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022292Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022292Ki
	  pods:               110
	System Info:
	  Machine ID:                 520117eb2c614e32ba12562d1c4a855b
	  System UUID:                d2088d50-dda3-441d-a1ce-e5d6a3366421
	  Boot ID:                    7eb9c7d9-8be8-415a-b196-052b632584a4
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.28.0
	  Kube-Proxy Version:         v1.28.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         105s
	  kube-system                 coredns-5dd5756b68-kz799                          100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     2m2s
	  kube-system                 etcd-old-k8s-version-670649                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         2m14s
	  kube-system                 kindnet-4nzl2                                     100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      2m2s
	  kube-system                 kube-apiserver-old-k8s-version-670649             250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m14s
	  kube-system                 kube-controller-manager-old-k8s-version-670649    200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m17s
	  kube-system                 kube-proxy-fffc5                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m2s
	  kube-system                 kube-scheduler-old-k8s-version-670649             100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m14s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m1s
	  kubernetes-dashboard        dashboard-metrics-scraper-5f989dc9cf-2jqxd        0 (0%)        0 (0%)      0 (0%)           0 (0%)         45s
	  kubernetes-dashboard        kubernetes-dashboard-8694d4445c-pv4kt             0 (0%)        0 (0%)      0 (0%)           0 (0%)         45s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 2m1s                   kube-proxy       
	  Normal  Starting                 57s                    kube-proxy       
	  Normal  Starting                 2m22s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  2m22s (x8 over 2m22s)  kubelet          Node old-k8s-version-670649 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m22s (x8 over 2m22s)  kubelet          Node old-k8s-version-670649 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m22s (x8 over 2m22s)  kubelet          Node old-k8s-version-670649 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    2m15s                  kubelet          Node old-k8s-version-670649 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  2m15s                  kubelet          Node old-k8s-version-670649 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     2m15s                  kubelet          Node old-k8s-version-670649 status is now: NodeHasSufficientPID
	  Normal  Starting                 2m15s                  kubelet          Starting kubelet.
	  Normal  RegisteredNode           2m3s                   node-controller  Node old-k8s-version-670649 event: Registered Node old-k8s-version-670649 in Controller
	  Normal  NodeReady                108s                   kubelet          Node old-k8s-version-670649 status is now: NodeReady
	  Normal  Starting                 70s                    kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  70s (x8 over 70s)      kubelet          Node old-k8s-version-670649 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    70s (x8 over 70s)      kubelet          Node old-k8s-version-670649 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     70s (x8 over 70s)      kubelet          Node old-k8s-version-670649 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           46s                    node-controller  Node old-k8s-version-670649 event: Registered Node old-k8s-version-670649 in Controller
	
	
	==> dmesg <==
	[Oct 9 19:45] overlayfs: idmapped layers are currently not supported
	[ +36.012100] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:47] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:48] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:49] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:50] overlayfs: idmapped layers are currently not supported
	[ +27.967875] overlayfs: idmapped layers are currently not supported
	[  +2.167003] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:51] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:52] overlayfs: idmapped layers are currently not supported
	[ +41.056229] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:53] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:54] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:55] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:57] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:59] overlayfs: idmapped layers are currently not supported
	[ +30.257956] overlayfs: idmapped layers are currently not supported
	[Oct 9 20:02] overlayfs: idmapped layers are currently not supported
	[Oct 9 20:04] overlayfs: idmapped layers are currently not supported
	[Oct 9 20:06] overlayfs: idmapped layers are currently not supported
	[Oct 9 20:12] overlayfs: idmapped layers are currently not supported
	[Oct 9 20:14] overlayfs: idmapped layers are currently not supported
	[Oct 9 20:15] overlayfs: idmapped layers are currently not supported
	[Oct 9 20:16] overlayfs: idmapped layers are currently not supported
	[ +23.810739] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [35d9539ca66deeeb7d54f441e4e9faa5be578cf1ccc3e88b622d80472a21a3aa] <==
	{"level":"info","ts":"2025-10-09T20:16:46.567983Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-10-09T20:16:46.568382Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 switched to configuration voters=(16896983918768216326)"}
	{"level":"info","ts":"2025-10-09T20:16:46.568456Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","added-peer-id":"ea7e25599daad906","added-peer-peer-urls":["https://192.168.76.2:2380"]}
	{"level":"info","ts":"2025-10-09T20:16:46.568547Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","cluster-version":"3.5"}
	{"level":"info","ts":"2025-10-09T20:16:46.568582Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-10-09T20:16:46.57024Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2025-10-09T20:16:46.570436Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"ea7e25599daad906","initial-advertise-peer-urls":["https://192.168.76.2:2380"],"listen-peer-urls":["https://192.168.76.2:2380"],"advertise-client-urls":["https://192.168.76.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.76.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-10-09T20:16:46.570477Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-10-09T20:16:46.570561Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2025-10-09T20:16:46.570568Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2025-10-09T20:16:47.593203Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 is starting a new election at term 2"}
	{"level":"info","ts":"2025-10-09T20:16:47.59325Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became pre-candidate at term 2"}
	{"level":"info","ts":"2025-10-09T20:16:47.593266Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgPreVoteResp from ea7e25599daad906 at term 2"}
	{"level":"info","ts":"2025-10-09T20:16:47.59328Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became candidate at term 3"}
	{"level":"info","ts":"2025-10-09T20:16:47.593287Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgVoteResp from ea7e25599daad906 at term 3"}
	{"level":"info","ts":"2025-10-09T20:16:47.593297Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became leader at term 3"}
	{"level":"info","ts":"2025-10-09T20:16:47.593305Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: ea7e25599daad906 elected leader ea7e25599daad906 at term 3"}
	{"level":"info","ts":"2025-10-09T20:16:47.601411Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"ea7e25599daad906","local-member-attributes":"{Name:old-k8s-version-670649 ClientURLs:[https://192.168.76.2:2379]}","request-path":"/0/members/ea7e25599daad906/attributes","cluster-id":"6f20f2c4b2fb5f8a","publish-timeout":"7s"}
	{"level":"info","ts":"2025-10-09T20:16:47.601616Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-10-09T20:16:47.602581Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.76.2:2379"}
	{"level":"info","ts":"2025-10-09T20:16:47.60263Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-10-09T20:16:47.603403Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-10-09T20:16:47.603468Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-10-09T20:16:47.603483Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-10-09T20:16:56.388529Z","caller":"traceutil/trace.go:171","msg":"trace[488274790] transaction","detail":"{read_only:false; response_revision:514; number_of_response:1; }","duration":"104.530086ms","start":"2025-10-09T20:16:56.283982Z","end":"2025-10-09T20:16:56.388512Z","steps":["trace[488274790] 'process raft request'  (duration: 94.490382ms)"],"step_count":1}
	
	
	==> kernel <==
	 20:17:55 up  3:00,  0 user,  load average: 2.25, 1.67, 1.60
	Linux old-k8s-version-670649 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [5311b03994768a53d0ae1759640177709434e395aecd0e575ce30445dd93333a] <==
	I1009 20:16:56.347881       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1009 20:16:56.365253       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1009 20:16:56.365387       1 main.go:148] setting mtu 1500 for CNI 
	I1009 20:16:56.365400       1 main.go:178] kindnetd IP family: "ipv4"
	I1009 20:16:56.365415       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-09T20:16:56Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1009 20:16:56.566678       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1009 20:16:56.566708       1 controller.go:381] "Waiting for informer caches to sync"
	I1009 20:16:56.566717       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1009 20:16:56.567496       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1009 20:17:26.569198       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1009 20:17:26.569389       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1009 20:17:26.569475       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1009 20:17:26.569500       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	I1009 20:17:28.166832       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1009 20:17:28.167590       1 metrics.go:72] Registering metrics
	I1009 20:17:28.167672       1 controller.go:711] "Syncing nftables rules"
	I1009 20:17:36.565466       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1009 20:17:36.565550       1 main.go:301] handling current node
	I1009 20:17:46.565421       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1009 20:17:46.565454       1 main.go:301] handling current node
	
	
	==> kube-apiserver [dbb7cba7d5da37c54a17588da4d76f5d70497f3e32a6e495c433bd46fb90292a] <==
	I1009 20:16:54.833050       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I1009 20:16:54.850060       1 shared_informer.go:318] Caches are synced for node_authorizer
	I1009 20:16:54.885735       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I1009 20:16:54.885766       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I1009 20:16:54.885978       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I1009 20:16:54.886024       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1009 20:16:54.895295       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I1009 20:16:54.895346       1 shared_informer.go:318] Caches are synced for configmaps
	I1009 20:16:54.895392       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1009 20:16:54.905312       1 aggregator.go:166] initial CRD sync complete...
	I1009 20:16:54.905337       1 autoregister_controller.go:141] Starting autoregister controller
	I1009 20:16:54.905345       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1009 20:16:54.905351       1 cache.go:39] Caches are synced for autoregister controller
	E1009 20:16:55.077475       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1009 20:16:55.418244       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1009 20:16:59.049162       1 controller.go:624] quota admission added evaluator for: namespaces
	I1009 20:16:59.112238       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1009 20:16:59.161553       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1009 20:16:59.175867       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1009 20:16:59.186912       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1009 20:16:59.260895       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.111.24.227"}
	I1009 20:16:59.299477       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.99.10.121"}
	I1009 20:17:09.621662       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1009 20:17:09.927196       1 controller.go:624] quota admission added evaluator for: endpoints
	I1009 20:17:10.079628       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [2dbcc3dbc3674682da2fd59a5223bfdbb8dff89e8f24cc4606b92f04b8486139] <==
	I1009 20:17:09.988700       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="56.435µs"
	I1009 20:17:10.099935       1 event.go:307] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set dashboard-metrics-scraper-5f989dc9cf to 1"
	I1009 20:17:10.104880       1 shared_informer.go:318] Caches are synced for garbage collector
	I1009 20:17:10.105460       1 event.go:307] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set kubernetes-dashboard-8694d4445c to 1"
	I1009 20:17:10.114959       1 shared_informer.go:318] Caches are synced for garbage collector
	I1009 20:17:10.115003       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I1009 20:17:10.119729       1 event.go:307] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: dashboard-metrics-scraper-5f989dc9cf-2jqxd"
	I1009 20:17:10.133097       1 event.go:307] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kubernetes-dashboard-8694d4445c-pv4kt"
	I1009 20:17:10.173420       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="85.171253ms"
	I1009 20:17:10.186397       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="94.302019ms"
	I1009 20:17:10.235558       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="49.107206ms"
	I1009 20:17:10.239559       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="1.727147ms"
	I1009 20:17:10.261539       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="46.499µs"
	I1009 20:17:10.295991       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="122.43494ms"
	I1009 20:17:10.296192       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="83.603µs"
	I1009 20:17:10.300169       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="54.277µs"
	I1009 20:17:16.809896       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="65.355µs"
	I1009 20:17:17.827339       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="86.647µs"
	I1009 20:17:18.827954       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="71.213µs"
	I1009 20:17:21.854083       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="23.455476ms"
	I1009 20:17:21.854222       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="78.754µs"
	I1009 20:17:35.854349       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="16.263675ms"
	I1009 20:17:35.854756       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="84.596µs"
	I1009 20:17:36.880443       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="72.223µs"
	I1009 20:17:41.970850       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="71.459µs"
	
	
	==> kube-proxy [f937ff34590cb60d397ac7c36418ba0efc5150992a944bf4f950e6e18660bffa] <==
	I1009 20:16:57.677329       1 server_others.go:69] "Using iptables proxy"
	I1009 20:16:57.798535       1 node.go:141] Successfully retrieved node IP: 192.168.76.2
	I1009 20:16:58.086495       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1009 20:16:58.095226       1 server_others.go:152] "Using iptables Proxier"
	I1009 20:16:58.095330       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1009 20:16:58.095364       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1009 20:16:58.095423       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1009 20:16:58.095657       1 server.go:846] "Version info" version="v1.28.0"
	I1009 20:16:58.095877       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1009 20:16:58.096636       1 config.go:188] "Starting service config controller"
	I1009 20:16:58.096737       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1009 20:16:58.096784       1 config.go:97] "Starting endpoint slice config controller"
	I1009 20:16:58.096810       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1009 20:16:58.101565       1 config.go:315] "Starting node config controller"
	I1009 20:16:58.102727       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1009 20:16:58.197695       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1009 20:16:58.197790       1 shared_informer.go:318] Caches are synced for service config
	I1009 20:16:58.213875       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [c615280026154e697494663ebf653ff25eb7cef14b02ea4bc2dce85a23e792fd] <==
	I1009 20:16:51.674180       1 serving.go:348] Generated self-signed cert in-memory
	I1009 20:16:57.261704       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.0"
	I1009 20:16:57.261734       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1009 20:16:57.299797       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I1009 20:16:57.299923       1 requestheader_controller.go:169] Starting RequestHeaderAuthRequestController
	I1009 20:16:57.299937       1 shared_informer.go:311] Waiting for caches to sync for RequestHeaderAuthRequestController
	I1009 20:16:57.299951       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I1009 20:16:57.319228       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1009 20:16:57.319331       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1009 20:16:57.319374       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1009 20:16:57.323872       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	I1009 20:16:57.445227       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1009 20:16:57.501505       1 shared_informer.go:318] Caches are synced for RequestHeaderAuthRequestController
	I1009 20:16:57.524598       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	
	
	==> kubelet <==
	Oct 09 20:17:11 old-k8s-version-670649 kubelet[779]: E1009 20:17:11.331209     779 projected.go:198] Error preparing data for projected volume kube-api-access-kd9g4 for pod kubernetes-dashboard/kubernetes-dashboard-8694d4445c-pv4kt: failed to sync configmap cache: timed out waiting for the condition
	Oct 09 20:17:11 old-k8s-version-670649 kubelet[779]: E1009 20:17:11.333460     779 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/475f099e-d888-4a61-99b6-f4998e50936f-kube-api-access-9n2q6 podName:475f099e-d888-4a61-99b6-f4998e50936f nodeName:}" failed. No retries permitted until 2025-10-09 20:17:11.832722681 +0000 UTC m=+26.528121130 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-9n2q6" (UniqueName: "kubernetes.io/projected/475f099e-d888-4a61-99b6-f4998e50936f-kube-api-access-9n2q6") pod "dashboard-metrics-scraper-5f989dc9cf-2jqxd" (UID: "475f099e-d888-4a61-99b6-f4998e50936f") : failed to sync configmap cache: timed out waiting for the condition
	Oct 09 20:17:11 old-k8s-version-670649 kubelet[779]: E1009 20:17:11.333531     779 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/a62b0cc0-36f9-44f2-96c7-87aed1665f8d-kube-api-access-kd9g4 podName:a62b0cc0-36f9-44f2-96c7-87aed1665f8d nodeName:}" failed. No retries permitted until 2025-10-09 20:17:11.833507218 +0000 UTC m=+26.528905618 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-kd9g4" (UniqueName: "kubernetes.io/projected/a62b0cc0-36f9-44f2-96c7-87aed1665f8d-kube-api-access-kd9g4") pod "kubernetes-dashboard-8694d4445c-pv4kt" (UID: "a62b0cc0-36f9-44f2-96c7-87aed1665f8d") : failed to sync configmap cache: timed out waiting for the condition
	Oct 09 20:17:11 old-k8s-version-670649 kubelet[779]: W1009 20:17:11.995167     779 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/242f5a73bf3408c78204127e16255d5d302b161639419f815a7a343ee83b928d/crio-f50acb1ed89e669a2317f2d7df5b34e347b3f6eb90bf162677952070aa3c568a WatchSource:0}: Error finding container f50acb1ed89e669a2317f2d7df5b34e347b3f6eb90bf162677952070aa3c568a: Status 404 returned error can't find the container with id f50acb1ed89e669a2317f2d7df5b34e347b3f6eb90bf162677952070aa3c568a
	Oct 09 20:17:12 old-k8s-version-670649 kubelet[779]: W1009 20:17:12.021498     779 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/242f5a73bf3408c78204127e16255d5d302b161639419f815a7a343ee83b928d/crio-66d20e9b28f262feb5c47848ca91efccce0a159344f085dba649e04d7fc7dcd2 WatchSource:0}: Error finding container 66d20e9b28f262feb5c47848ca91efccce0a159344f085dba649e04d7fc7dcd2: Status 404 returned error can't find the container with id 66d20e9b28f262feb5c47848ca91efccce0a159344f085dba649e04d7fc7dcd2
	Oct 09 20:17:16 old-k8s-version-670649 kubelet[779]: I1009 20:17:16.795300     779 scope.go:117] "RemoveContainer" containerID="126370ef452b90fed472d5d1ac8802bfdd638bc5cec858657c8691433a06ce6d"
	Oct 09 20:17:17 old-k8s-version-670649 kubelet[779]: I1009 20:17:17.802729     779 scope.go:117] "RemoveContainer" containerID="6ed7fe604c072156684da501c2834d007b87c77764f5fe155f964fb6d5c7099d"
	Oct 09 20:17:17 old-k8s-version-670649 kubelet[779]: E1009 20:17:17.803010     779 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-2jqxd_kubernetes-dashboard(475f099e-d888-4a61-99b6-f4998e50936f)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-2jqxd" podUID="475f099e-d888-4a61-99b6-f4998e50936f"
	Oct 09 20:17:17 old-k8s-version-670649 kubelet[779]: I1009 20:17:17.810876     779 scope.go:117] "RemoveContainer" containerID="126370ef452b90fed472d5d1ac8802bfdd638bc5cec858657c8691433a06ce6d"
	Oct 09 20:17:18 old-k8s-version-670649 kubelet[779]: I1009 20:17:18.805725     779 scope.go:117] "RemoveContainer" containerID="6ed7fe604c072156684da501c2834d007b87c77764f5fe155f964fb6d5c7099d"
	Oct 09 20:17:18 old-k8s-version-670649 kubelet[779]: E1009 20:17:18.806005     779 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-2jqxd_kubernetes-dashboard(475f099e-d888-4a61-99b6-f4998e50936f)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-2jqxd" podUID="475f099e-d888-4a61-99b6-f4998e50936f"
	Oct 09 20:17:21 old-k8s-version-670649 kubelet[779]: I1009 20:17:21.956044     779 scope.go:117] "RemoveContainer" containerID="6ed7fe604c072156684da501c2834d007b87c77764f5fe155f964fb6d5c7099d"
	Oct 09 20:17:21 old-k8s-version-670649 kubelet[779]: E1009 20:17:21.956855     779 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-2jqxd_kubernetes-dashboard(475f099e-d888-4a61-99b6-f4998e50936f)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-2jqxd" podUID="475f099e-d888-4a61-99b6-f4998e50936f"
	Oct 09 20:17:27 old-k8s-version-670649 kubelet[779]: I1009 20:17:27.830741     779 scope.go:117] "RemoveContainer" containerID="c00ae0a7c53b38c6eac3d76a7f59448c9a2d7b83553cd419b403f69d70cbc2fd"
	Oct 09 20:17:27 old-k8s-version-670649 kubelet[779]: I1009 20:17:27.899971     779 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-pv4kt" podStartSLOduration=8.278069781 podCreationTimestamp="2025-10-09 20:17:10 +0000 UTC" firstStartedPulling="2025-10-09 20:17:12.03108538 +0000 UTC m=+26.726483780" lastFinishedPulling="2025-10-09 20:17:21.651545865 +0000 UTC m=+36.346944264" observedRunningTime="2025-10-09 20:17:21.829165875 +0000 UTC m=+36.524564299" watchObservedRunningTime="2025-10-09 20:17:27.898530265 +0000 UTC m=+42.593928673"
	Oct 09 20:17:36 old-k8s-version-670649 kubelet[779]: I1009 20:17:36.623374     779 scope.go:117] "RemoveContainer" containerID="6ed7fe604c072156684da501c2834d007b87c77764f5fe155f964fb6d5c7099d"
	Oct 09 20:17:36 old-k8s-version-670649 kubelet[779]: I1009 20:17:36.853077     779 scope.go:117] "RemoveContainer" containerID="6ed7fe604c072156684da501c2834d007b87c77764f5fe155f964fb6d5c7099d"
	Oct 09 20:17:36 old-k8s-version-670649 kubelet[779]: I1009 20:17:36.853973     779 scope.go:117] "RemoveContainer" containerID="a71c866e69c210bdbe0a0dd45682a1800e2e4a640ba2d08c75b017add09c2faf"
	Oct 09 20:17:36 old-k8s-version-670649 kubelet[779]: E1009 20:17:36.854247     779 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-2jqxd_kubernetes-dashboard(475f099e-d888-4a61-99b6-f4998e50936f)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-2jqxd" podUID="475f099e-d888-4a61-99b6-f4998e50936f"
	Oct 09 20:17:41 old-k8s-version-670649 kubelet[779]: I1009 20:17:41.955922     779 scope.go:117] "RemoveContainer" containerID="a71c866e69c210bdbe0a0dd45682a1800e2e4a640ba2d08c75b017add09c2faf"
	Oct 09 20:17:41 old-k8s-version-670649 kubelet[779]: E1009 20:17:41.956794     779 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-2jqxd_kubernetes-dashboard(475f099e-d888-4a61-99b6-f4998e50936f)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-2jqxd" podUID="475f099e-d888-4a61-99b6-f4998e50936f"
	Oct 09 20:17:49 old-k8s-version-670649 kubelet[779]: I1009 20:17:49.800216     779 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	Oct 09 20:17:49 old-k8s-version-670649 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 09 20:17:49 old-k8s-version-670649 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 09 20:17:49 old-k8s-version-670649 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	
	
	==> kubernetes-dashboard [e8df8a1bb1ca7ae3c4afd2076b94df79858ba48bc7832eecd672642171f287c3] <==
	2025/10/09 20:17:21 Starting overwatch
	2025/10/09 20:17:21 Using namespace: kubernetes-dashboard
	2025/10/09 20:17:21 Using in-cluster config to connect to apiserver
	2025/10/09 20:17:21 Using secret token for csrf signing
	2025/10/09 20:17:21 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/10/09 20:17:21 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/10/09 20:17:21 Successful initial request to the apiserver, version: v1.28.0
	2025/10/09 20:17:21 Generating JWE encryption key
	2025/10/09 20:17:21 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/10/09 20:17:21 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/10/09 20:17:21 Initializing JWE encryption key from synchronized object
	2025/10/09 20:17:21 Creating in-cluster Sidecar client
	2025/10/09 20:17:21 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/09 20:17:21 Serving insecurely on HTTP port: 9090
	2025/10/09 20:17:51 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	
	
	==> storage-provisioner [c00ae0a7c53b38c6eac3d76a7f59448c9a2d7b83553cd419b403f69d70cbc2fd] <==
	I1009 20:16:57.177345       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1009 20:17:27.217824       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [cb650f367de61c5e442498f548a4c14a5e49c4b9f22fdd5267e068a8e42bee89] <==
	I1009 20:17:27.941186       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1009 20:17:27.955620       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1009 20:17:27.955743       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1009 20:17:45.366032       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1009 20:17:45.366272       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-670649_d00cb327-b85d-4951-9c59-9578e5c0cbb4!
	I1009 20:17:45.372275       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"1ca381e3-6bc7-4716-b803-0241acff8a2f", APIVersion:"v1", ResourceVersion:"676", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-670649_d00cb327-b85d-4951-9c59-9578e5c0cbb4 became leader
	I1009 20:17:45.466849       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-670649_d00cb327-b85d-4951-9c59-9578e5c0cbb4!
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-670649 -n old-k8s-version-670649
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-670649 -n old-k8s-version-670649: exit status 2 (461.241694ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context old-k8s-version-670649 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/old-k8s-version/serial/Pause (7.66s)
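To iterate on this failure outside CI, the subtest can be re-run on its own. This is only a sketch, assuming a minikube source checkout with out/minikube-linux-arm64 already built; the integration harness may require additional flags in your environment beyond the standard go test selectors shown here:

	go test ./test/integration -run 'TestStartStop/group/old-k8s-version/serial/Pause' -v -timeout 60m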

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/Pause (6.99s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p no-preload-020313 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 pause -p no-preload-020313 --alsologtostderr -v=1: exit status 80 (2.347475816s)

                                                
                                                
-- stdout --
	* Pausing node no-preload-020313 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1009 20:18:57.139229  491322 out.go:360] Setting OutFile to fd 1 ...
	I1009 20:18:57.139446  491322 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1009 20:18:57.139478  491322 out.go:374] Setting ErrFile to fd 2...
	I1009 20:18:57.139497  491322 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1009 20:18:57.139778  491322 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21683-294150/.minikube/bin
	I1009 20:18:57.140082  491322 out.go:368] Setting JSON to false
	I1009 20:18:57.140135  491322 mustload.go:65] Loading cluster: no-preload-020313
	I1009 20:18:57.140639  491322 config.go:182] Loaded profile config "no-preload-020313": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1009 20:18:57.141261  491322 cli_runner.go:164] Run: docker container inspect no-preload-020313 --format={{.State.Status}}
	I1009 20:18:57.161344  491322 host.go:66] Checking if "no-preload-020313" exists ...
	I1009 20:18:57.161673  491322 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1009 20:18:57.227559  491322 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:49 OomKillDisable:true NGoroutines:62 SystemTime:2025-10-09 20:18:57.218121027 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214827008 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1009 20:18:57.228215  491322 pause.go:58] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-
cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-arm64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1758198818-20370/minikube-v1.37.0-1758198818-20370-arm64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1758198818-20370-arm64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qe
mu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:no-preload-020313 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true)
wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1009 20:18:57.229674  491322 out.go:179] * Pausing node no-preload-020313 ... 
	I1009 20:18:57.230987  491322 host.go:66] Checking if "no-preload-020313" exists ...
	I1009 20:18:57.231412  491322 ssh_runner.go:195] Run: systemctl --version
	I1009 20:18:57.231472  491322 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-020313
	I1009 20:18:57.249061  491322 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33431 SSHKeyPath:/home/jenkins/minikube-integration/21683-294150/.minikube/machines/no-preload-020313/id_rsa Username:docker}
	I1009 20:18:57.360134  491322 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1009 20:18:57.382717  491322 pause.go:52] kubelet running: true
	I1009 20:18:57.382799  491322 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1009 20:18:57.624527  491322 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1009 20:18:57.624625  491322 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1009 20:18:57.702193  491322 cri.go:89] found id: "3d32dbce2cc613f987b4189a71d62e455a6390b309ef522069aa954c7269e07b"
	I1009 20:18:57.702213  491322 cri.go:89] found id: "d6b7ee85aeefababe2c083f6e0a8cd0dc31cd7c5844cb95bf3b217fc2272910f"
	I1009 20:18:57.702218  491322 cri.go:89] found id: "cfac8e5ac3da24e22eb9c6cef2647c4b3078ab69fc092c7b1a73d4bc627d2f52"
	I1009 20:18:57.702222  491322 cri.go:89] found id: "0442d50e4e3961eb21b5a12dda29ff9aea11f015d76a75f5fc6d85fbecaab975"
	I1009 20:18:57.702225  491322 cri.go:89] found id: "042d3009a6505b38db3a5645a55f6992d1b6ef9254086f64eef6f0621cff64c8"
	I1009 20:18:57.702229  491322 cri.go:89] found id: "22b87e577d7a8f108e7d77d095e44d5b3392e21fb7da8260fe838b3e930b2229"
	I1009 20:18:57.702233  491322 cri.go:89] found id: "5abd9717aed8a5baaa24ce4dbac3f6a6652f3d3b84cb43dc09007beee7a84423"
	I1009 20:18:57.702236  491322 cri.go:89] found id: "bdcbfecca01ea6e3e0ee392800df2ec67f04ed687955da27cce3925008d3bc5a"
	I1009 20:18:57.702249  491322 cri.go:89] found id: "d49e0cc690dcab668ca06327548e322b4d012301c7ad96444959726efbca4e09"
	I1009 20:18:57.702261  491322 cri.go:89] found id: "874aa1307bd23de55196930ba25ea04fa85d47795dbd099fb33715b82b0ca793"
	I1009 20:18:57.702270  491322 cri.go:89] found id: "fe7a54433b35090ac26deba9c9b4e3b51e7532d0b41a463ca4aa4968c8781c7f"
	I1009 20:18:57.702273  491322 cri.go:89] found id: ""
	I1009 20:18:57.702330  491322 ssh_runner.go:195] Run: sudo runc list -f json
	I1009 20:18:57.714077  491322 retry.go:31] will retry after 287.2381ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-09T20:18:57Z" level=error msg="open /run/runc: no such file or directory"
	I1009 20:18:58.002304  491322 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1009 20:18:58.024836  491322 pause.go:52] kubelet running: false
	I1009 20:18:58.024902  491322 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1009 20:18:58.210293  491322 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1009 20:18:58.210411  491322 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1009 20:18:58.275914  491322 cri.go:89] found id: "3d32dbce2cc613f987b4189a71d62e455a6390b309ef522069aa954c7269e07b"
	I1009 20:18:58.275939  491322 cri.go:89] found id: "d6b7ee85aeefababe2c083f6e0a8cd0dc31cd7c5844cb95bf3b217fc2272910f"
	I1009 20:18:58.275944  491322 cri.go:89] found id: "cfac8e5ac3da24e22eb9c6cef2647c4b3078ab69fc092c7b1a73d4bc627d2f52"
	I1009 20:18:58.275948  491322 cri.go:89] found id: "0442d50e4e3961eb21b5a12dda29ff9aea11f015d76a75f5fc6d85fbecaab975"
	I1009 20:18:58.275951  491322 cri.go:89] found id: "042d3009a6505b38db3a5645a55f6992d1b6ef9254086f64eef6f0621cff64c8"
	I1009 20:18:58.275954  491322 cri.go:89] found id: "22b87e577d7a8f108e7d77d095e44d5b3392e21fb7da8260fe838b3e930b2229"
	I1009 20:18:58.275957  491322 cri.go:89] found id: "5abd9717aed8a5baaa24ce4dbac3f6a6652f3d3b84cb43dc09007beee7a84423"
	I1009 20:18:58.275960  491322 cri.go:89] found id: "bdcbfecca01ea6e3e0ee392800df2ec67f04ed687955da27cce3925008d3bc5a"
	I1009 20:18:58.275963  491322 cri.go:89] found id: "d49e0cc690dcab668ca06327548e322b4d012301c7ad96444959726efbca4e09"
	I1009 20:18:58.275969  491322 cri.go:89] found id: "874aa1307bd23de55196930ba25ea04fa85d47795dbd099fb33715b82b0ca793"
	I1009 20:18:58.275972  491322 cri.go:89] found id: "fe7a54433b35090ac26deba9c9b4e3b51e7532d0b41a463ca4aa4968c8781c7f"
	I1009 20:18:58.275975  491322 cri.go:89] found id: ""
	I1009 20:18:58.276023  491322 ssh_runner.go:195] Run: sudo runc list -f json
	I1009 20:18:58.288243  491322 retry.go:31] will retry after 199.582919ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-09T20:18:58Z" level=error msg="open /run/runc: no such file or directory"
	I1009 20:18:58.488563  491322 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1009 20:18:58.502949  491322 pause.go:52] kubelet running: false
	I1009 20:18:58.503017  491322 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1009 20:18:58.688138  491322 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1009 20:18:58.688218  491322 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1009 20:18:58.784427  491322 cri.go:89] found id: "3d32dbce2cc613f987b4189a71d62e455a6390b309ef522069aa954c7269e07b"
	I1009 20:18:58.784457  491322 cri.go:89] found id: "d6b7ee85aeefababe2c083f6e0a8cd0dc31cd7c5844cb95bf3b217fc2272910f"
	I1009 20:18:58.784464  491322 cri.go:89] found id: "cfac8e5ac3da24e22eb9c6cef2647c4b3078ab69fc092c7b1a73d4bc627d2f52"
	I1009 20:18:58.784467  491322 cri.go:89] found id: "0442d50e4e3961eb21b5a12dda29ff9aea11f015d76a75f5fc6d85fbecaab975"
	I1009 20:18:58.784471  491322 cri.go:89] found id: "042d3009a6505b38db3a5645a55f6992d1b6ef9254086f64eef6f0621cff64c8"
	I1009 20:18:58.784474  491322 cri.go:89] found id: "22b87e577d7a8f108e7d77d095e44d5b3392e21fb7da8260fe838b3e930b2229"
	I1009 20:18:58.784514  491322 cri.go:89] found id: "5abd9717aed8a5baaa24ce4dbac3f6a6652f3d3b84cb43dc09007beee7a84423"
	I1009 20:18:58.784518  491322 cri.go:89] found id: "bdcbfecca01ea6e3e0ee392800df2ec67f04ed687955da27cce3925008d3bc5a"
	I1009 20:18:58.784522  491322 cri.go:89] found id: "d49e0cc690dcab668ca06327548e322b4d012301c7ad96444959726efbca4e09"
	I1009 20:18:58.784528  491322 cri.go:89] found id: "874aa1307bd23de55196930ba25ea04fa85d47795dbd099fb33715b82b0ca793"
	I1009 20:18:58.784536  491322 cri.go:89] found id: "fe7a54433b35090ac26deba9c9b4e3b51e7532d0b41a463ca4aa4968c8781c7f"
	I1009 20:18:58.784539  491322 cri.go:89] found id: ""
	I1009 20:18:58.784603  491322 ssh_runner.go:195] Run: sudo runc list -f json
	I1009 20:18:58.796726  491322 retry.go:31] will retry after 320.076254ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-09T20:18:58Z" level=error msg="open /run/runc: no such file or directory"
	I1009 20:18:59.117358  491322 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1009 20:18:59.132923  491322 pause.go:52] kubelet running: false
	I1009 20:18:59.133058  491322 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1009 20:18:59.318499  491322 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1009 20:18:59.318615  491322 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1009 20:18:59.385449  491322 cri.go:89] found id: "3d32dbce2cc613f987b4189a71d62e455a6390b309ef522069aa954c7269e07b"
	I1009 20:18:59.385516  491322 cri.go:89] found id: "d6b7ee85aeefababe2c083f6e0a8cd0dc31cd7c5844cb95bf3b217fc2272910f"
	I1009 20:18:59.385538  491322 cri.go:89] found id: "cfac8e5ac3da24e22eb9c6cef2647c4b3078ab69fc092c7b1a73d4bc627d2f52"
	I1009 20:18:59.385550  491322 cri.go:89] found id: "0442d50e4e3961eb21b5a12dda29ff9aea11f015d76a75f5fc6d85fbecaab975"
	I1009 20:18:59.385554  491322 cri.go:89] found id: "042d3009a6505b38db3a5645a55f6992d1b6ef9254086f64eef6f0621cff64c8"
	I1009 20:18:59.385558  491322 cri.go:89] found id: "22b87e577d7a8f108e7d77d095e44d5b3392e21fb7da8260fe838b3e930b2229"
	I1009 20:18:59.385561  491322 cri.go:89] found id: "5abd9717aed8a5baaa24ce4dbac3f6a6652f3d3b84cb43dc09007beee7a84423"
	I1009 20:18:59.385564  491322 cri.go:89] found id: "bdcbfecca01ea6e3e0ee392800df2ec67f04ed687955da27cce3925008d3bc5a"
	I1009 20:18:59.385567  491322 cri.go:89] found id: "d49e0cc690dcab668ca06327548e322b4d012301c7ad96444959726efbca4e09"
	I1009 20:18:59.385580  491322 cri.go:89] found id: "874aa1307bd23de55196930ba25ea04fa85d47795dbd099fb33715b82b0ca793"
	I1009 20:18:59.385583  491322 cri.go:89] found id: "fe7a54433b35090ac26deba9c9b4e3b51e7532d0b41a463ca4aa4968c8781c7f"
	I1009 20:18:59.385586  491322 cri.go:89] found id: ""
	I1009 20:18:59.385638  491322 ssh_runner.go:195] Run: sudo runc list -f json
	I1009 20:18:59.399090  491322 out.go:203] 
	W1009 20:18:59.400250  491322 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-09T20:18:59Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-09T20:18:59Z" level=error msg="open /run/runc: no such file or directory"
	
	W1009 20:18:59.400293  491322 out.go:285] * 
	* 
	W1009 20:18:59.406039  491322 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1009 20:18:59.407228  491322 out.go:203] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-arm64 pause -p no-preload-020313 --alsologtostderr -v=1 failed: exit status 80
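The pause failure above bottoms out in "sudo runc list -f json" returning "open /run/runc: no such file or directory", i.e. the runc state directory that the pause flow expects is absent on the node. A minimal way to see what the node actually has is to probe the candidate runtime state directories and the CRI view over SSH. This is a diagnostic sketch only; the /run/crun and /run/crio paths are assumptions about where an alternative OCI runtime might keep its state, not something taken from this report:

	out/minikube-linux-arm64 -p no-preload-020313 ssh -- sudo ls -d /run/runc /run/crun /run/crio
	out/minikube-linux-arm64 -p no-preload-020313 ssh -- sudo crictl ps -a
	out/minikube-linux-arm64 -p no-preload-020313 ssh -- sudo runc list -f json

The first command shows which state directories exist, the second confirms whether the CRI runtime still reports the containers listed in the trace above, and the third reproduces the exact call that the pause path retries before giving up.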
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect no-preload-020313
helpers_test.go:243: (dbg) docker inspect no-preload-020313:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "5f4dc51ee851ef6c368b3e8adfe4e5921c2b1bdc3199a9c54c6ccf58afab3861",
	        "Created": "2025-10-09T20:16:11.761091001Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 485746,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-09T20:17:52.1087081Z",
	            "FinishedAt": "2025-10-09T20:17:51.097082467Z"
	        },
	        "Image": "sha256:0e619a02aa1f399c748a05785ca381533d7a01df724b62f3a9ddd9e288db8f6f",
	        "ResolvConfPath": "/var/lib/docker/containers/5f4dc51ee851ef6c368b3e8adfe4e5921c2b1bdc3199a9c54c6ccf58afab3861/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/5f4dc51ee851ef6c368b3e8adfe4e5921c2b1bdc3199a9c54c6ccf58afab3861/hostname",
	        "HostsPath": "/var/lib/docker/containers/5f4dc51ee851ef6c368b3e8adfe4e5921c2b1bdc3199a9c54c6ccf58afab3861/hosts",
	        "LogPath": "/var/lib/docker/containers/5f4dc51ee851ef6c368b3e8adfe4e5921c2b1bdc3199a9c54c6ccf58afab3861/5f4dc51ee851ef6c368b3e8adfe4e5921c2b1bdc3199a9c54c6ccf58afab3861-json.log",
	        "Name": "/no-preload-020313",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-020313:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "no-preload-020313",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "5f4dc51ee851ef6c368b3e8adfe4e5921c2b1bdc3199a9c54c6ccf58afab3861",
	                "LowerDir": "/var/lib/docker/overlay2/89e13088c213ea195f3949972cfac4cf35790514b34c96e6ac7e173e96264c21-init/diff:/var/lib/docker/overlay2/810a91395ed9b7ed2c0bbbdee8600efcf64f88722cbabc47d471235a9f901ed9/diff",
	                "MergedDir": "/var/lib/docker/overlay2/89e13088c213ea195f3949972cfac4cf35790514b34c96e6ac7e173e96264c21/merged",
	                "UpperDir": "/var/lib/docker/overlay2/89e13088c213ea195f3949972cfac4cf35790514b34c96e6ac7e173e96264c21/diff",
	                "WorkDir": "/var/lib/docker/overlay2/89e13088c213ea195f3949972cfac4cf35790514b34c96e6ac7e173e96264c21/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-020313",
	                "Source": "/var/lib/docker/volumes/no-preload-020313/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-020313",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-020313",
	                "name.minikube.sigs.k8s.io": "no-preload-020313",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "8f8a9b7a1c41c18467eaecea135d4540a1086c426b7aa2c4bea4a0559b6a0a27",
	            "SandboxKey": "/var/run/docker/netns/8f8a9b7a1c41",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33431"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33432"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33435"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33433"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33434"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-020313": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "ee:d7:ce:42:d0:83",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "e50c4d176bfa3eef4ff1ee9bca0047e351ec3aec36a4229f03c93ea4e9e653dd",
	                    "EndpointID": "d069caa9e0dc0a399cacbdbeedbdb6e5d8d58aa404f272e52fcf815989963c6e",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-020313",
	                        "5f4dc51ee851"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-020313 -n no-preload-020313
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-020313 -n no-preload-020313: exit status 2 (356.917306ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/no-preload/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-020313 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p no-preload-020313 logs -n 25: (1.547369218s)
helpers_test.go:260: TestStartStop/group/no-preload/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────┬─────────┬─────────┬─────────────────────┬──────────────────
───┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │         PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────┼─────────┼─────────┼─────────────────────┼──────────────────
───┤
	│ start   │ -p cert-expiration-282540 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio                                                                                                                                        │ cert-expiration-282540   │ jenkins │ v1.37.0 │ 09 Oct 25 20:12 UTC │ 09 Oct 25 20:12 UTC │
	│ delete  │ -p force-systemd-env-242564                                                                                                                                                                                                                   │ force-systemd-env-242564 │ jenkins │ v1.37.0 │ 09 Oct 25 20:14 UTC │ 09 Oct 25 20:14 UTC │
	│ start   │ -p cert-options-038875 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio                     │ cert-options-038875      │ jenkins │ v1.37.0 │ 09 Oct 25 20:14 UTC │ 09 Oct 25 20:15 UTC │
	│ ssh     │ cert-options-038875 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                                                   │ cert-options-038875      │ jenkins │ v1.37.0 │ 09 Oct 25 20:15 UTC │ 09 Oct 25 20:15 UTC │
	│ ssh     │ -p cert-options-038875 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                 │ cert-options-038875      │ jenkins │ v1.37.0 │ 09 Oct 25 20:15 UTC │ 09 Oct 25 20:15 UTC │
	│ delete  │ -p cert-options-038875                                                                                                                                                                                                                        │ cert-options-038875      │ jenkins │ v1.37.0 │ 09 Oct 25 20:15 UTC │ 09 Oct 25 20:15 UTC │
	│ start   │ -p old-k8s-version-670649 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-670649   │ jenkins │ v1.37.0 │ 09 Oct 25 20:15 UTC │ 09 Oct 25 20:16 UTC │
	│ start   │ -p cert-expiration-282540 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-282540   │ jenkins │ v1.37.0 │ 09 Oct 25 20:15 UTC │ 09 Oct 25 20:16 UTC │
	│ delete  │ -p cert-expiration-282540                                                                                                                                                                                                                     │ cert-expiration-282540   │ jenkins │ v1.37.0 │ 09 Oct 25 20:16 UTC │ 09 Oct 25 20:16 UTC │
	│ start   │ -p no-preload-020313 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-020313        │ jenkins │ v1.37.0 │ 09 Oct 25 20:16 UTC │ 09 Oct 25 20:17 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-670649 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-670649   │ jenkins │ v1.37.0 │ 09 Oct 25 20:16 UTC │                     │
	│ stop    │ -p old-k8s-version-670649 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-670649   │ jenkins │ v1.37.0 │ 09 Oct 25 20:16 UTC │ 09 Oct 25 20:16 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-670649 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-670649   │ jenkins │ v1.37.0 │ 09 Oct 25 20:16 UTC │ 09 Oct 25 20:16 UTC │
	│ start   │ -p old-k8s-version-670649 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-670649   │ jenkins │ v1.37.0 │ 09 Oct 25 20:16 UTC │ 09 Oct 25 20:17 UTC │
	│ addons  │ enable metrics-server -p no-preload-020313 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-020313        │ jenkins │ v1.37.0 │ 09 Oct 25 20:17 UTC │                     │
	│ stop    │ -p no-preload-020313 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-020313        │ jenkins │ v1.37.0 │ 09 Oct 25 20:17 UTC │ 09 Oct 25 20:17 UTC │
	│ image   │ old-k8s-version-670649 image list --format=json                                                                                                                                                                                               │ old-k8s-version-670649   │ jenkins │ v1.37.0 │ 09 Oct 25 20:17 UTC │ 09 Oct 25 20:17 UTC │
	│ pause   │ -p old-k8s-version-670649 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-670649   │ jenkins │ v1.37.0 │ 09 Oct 25 20:17 UTC │                     │
	│ addons  │ enable dashboard -p no-preload-020313 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-020313        │ jenkins │ v1.37.0 │ 09 Oct 25 20:17 UTC │ 09 Oct 25 20:17 UTC │
	│ start   │ -p no-preload-020313 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-020313        │ jenkins │ v1.37.0 │ 09 Oct 25 20:17 UTC │ 09 Oct 25 20:18 UTC │
	│ delete  │ -p old-k8s-version-670649                                                                                                                                                                                                                     │ old-k8s-version-670649   │ jenkins │ v1.37.0 │ 09 Oct 25 20:17 UTC │ 09 Oct 25 20:17 UTC │
	│ delete  │ -p old-k8s-version-670649                                                                                                                                                                                                                     │ old-k8s-version-670649   │ jenkins │ v1.37.0 │ 09 Oct 25 20:18 UTC │ 09 Oct 25 20:18 UTC │
	│ start   │ -p embed-certs-565110 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-565110       │ jenkins │ v1.37.0 │ 09 Oct 25 20:18 UTC │                     │
	│ image   │ no-preload-020313 image list --format=json                                                                                                                                                                                                    │ no-preload-020313        │ jenkins │ v1.37.0 │ 09 Oct 25 20:18 UTC │ 09 Oct 25 20:18 UTC │
	│ pause   │ -p no-preload-020313 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-020313        │ jenkins │ v1.37.0 │ 09 Oct 25 20:18 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────┴─────────┴─────────┴─────────────────────┴──────────────────
───┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/09 20:18:00
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1009 20:18:00.590755  487957 out.go:360] Setting OutFile to fd 1 ...
	I1009 20:18:00.591024  487957 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1009 20:18:00.591052  487957 out.go:374] Setting ErrFile to fd 2...
	I1009 20:18:00.591103  487957 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1009 20:18:00.600812  487957 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21683-294150/.minikube/bin
	I1009 20:18:00.601547  487957 out.go:368] Setting JSON to false
	I1009 20:18:00.602578  487957 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":10820,"bootTime":1760030261,"procs":177,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1009 20:18:00.602791  487957 start.go:143] virtualization:  
	I1009 20:18:00.609492  487957 out.go:179] * [embed-certs-565110] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1009 20:18:00.622425  487957 out.go:179]   - MINIKUBE_LOCATION=21683
	I1009 20:18:00.622792  487957 notify.go:221] Checking for updates...
	I1009 20:18:00.639698  487957 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1009 20:18:00.643468  487957 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21683-294150/kubeconfig
	I1009 20:18:00.647174  487957 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21683-294150/.minikube
	I1009 20:18:00.650799  487957 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1009 20:18:00.654297  487957 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1009 20:18:00.658339  487957 config.go:182] Loaded profile config "no-preload-020313": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1009 20:18:00.658502  487957 driver.go:422] Setting default libvirt URI to qemu:///system
	I1009 20:18:00.710324  487957 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1009 20:18:00.710528  487957 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1009 20:18:00.851868  487957 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:38 OomKillDisable:true NGoroutines:53 SystemTime:2025-10-09 20:18:00.839130737 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214827008 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1009 20:18:00.852005  487957 docker.go:319] overlay module found
	I1009 20:18:00.855728  487957 out.go:179] * Using the docker driver based on user configuration
	I1009 20:18:00.857839  487957 start.go:309] selected driver: docker
	I1009 20:18:00.857861  487957 start.go:930] validating driver "docker" against <nil>
	I1009 20:18:00.857876  487957 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1009 20:18:00.858636  487957 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1009 20:18:00.991203  487957 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:38 OomKillDisable:true NGoroutines:53 SystemTime:2025-10-09 20:18:00.976990516 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214827008 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1009 20:18:00.991377  487957 start_flags.go:328] no existing cluster config was found, will generate one from the flags 
	I1009 20:18:00.991643  487957 start_flags.go:993] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1009 20:18:00.995088  487957 out.go:179] * Using Docker driver with root privileges
	I1009 20:18:00.998145  487957 cni.go:84] Creating CNI manager for ""
	I1009 20:18:00.998224  487957 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1009 20:18:00.998236  487957 start_flags.go:337] Found "CNI" CNI - setting NetworkPlugin=cni
	I1009 20:18:00.998315  487957 start.go:353] cluster config:
	{Name:embed-certs-565110 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-565110 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Contain
erRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPI
D:0 GPUs: AutoPauseInterval:1m0s}
	I1009 20:18:01.001794  487957 out.go:179] * Starting "embed-certs-565110" primary control-plane node in "embed-certs-565110" cluster
	I1009 20:18:01.004911  487957 cache.go:123] Beginning downloading kic base image for docker with crio
	I1009 20:18:01.008082  487957 out.go:179] * Pulling base image v0.0.48-1759745255-21703 ...
	I1009 20:18:01.010981  487957 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1009 20:18:01.011043  487957 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21683-294150/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1009 20:18:01.011053  487957 cache.go:58] Caching tarball of preloaded images
	I1009 20:18:01.011102  487957 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon
	I1009 20:18:01.011406  487957 preload.go:233] Found /home/jenkins/minikube-integration/21683-294150/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1009 20:18:01.011420  487957 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1009 20:18:01.011532  487957 profile.go:143] Saving config to /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/embed-certs-565110/config.json ...
	I1009 20:18:01.011551  487957 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/embed-certs-565110/config.json: {Name:mk0c43fa37b9dbd5eccdb406ccdff1b49370e0a7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 20:18:01.049100  487957 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon, skipping pull
	I1009 20:18:01.049142  487957 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 exists in daemon, skipping load
	I1009 20:18:01.049156  487957 cache.go:232] Successfully downloaded all kic artifacts
	I1009 20:18:01.049180  487957 start.go:361] acquireMachinesLock for embed-certs-565110: {Name:mk32ec325145c7dbf708685a0b7d3c4450230c14 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1009 20:18:01.049274  487957 start.go:365] duration metric: took 79.254µs to acquireMachinesLock for "embed-certs-565110"
	I1009 20:18:01.049300  487957 start.go:94] Provisioning new machine with config: &{Name:embed-certs-565110 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-565110 Namespace:default APIServerHAVIP: APIServe
rName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmw
arePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1009 20:18:01.049378  487957 start.go:126] createHost starting for "" (driver="docker")
	I1009 20:17:59.144105  485563 cli_runner.go:164] Run: docker network inspect no-preload-020313 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
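The `docker network inspect` invocation above is hard to read because the log escapes its quotes; it is just a Go template that renders the network's name, driver, subnet, gateway, MTU and attached container IPs as a JSON object. A minimal sketch of the same technique, with a simpler template chosen for illustration:

	# Sketch: render selected fields of a Docker network with a Go template.
	docker network inspect no-preload-020313 \
	  --format '{{.Name}} ({{.Driver}}): {{range .IPAM.Config}}{{.Subnet}} gw {{.Gateway}}{{end}}'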
	I1009 20:17:59.172476  485563 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1009 20:17:59.176779  485563 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1009 20:17:59.191417  485563 kubeadm.go:883] updating cluster {Name:no-preload-020313 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-020313 Namespace:default APIServerHAVIP: APIServerName:minikubeCA API
ServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker
BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1009 20:17:59.191536  485563 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1009 20:17:59.191579  485563 ssh_runner.go:195] Run: sudo crictl images --output json
	I1009 20:17:59.225318  485563 crio.go:514] all images are preloaded for cri-o runtime.
	I1009 20:17:59.225339  485563 cache_images.go:85] Images are preloaded, skipping loading
	I1009 20:17:59.225347  485563 kubeadm.go:934] updating node { 192.168.85.2 8443 v1.34.1 crio true true} ...
	I1009 20:17:59.225437  485563 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=no-preload-020313 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:no-preload-020313 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
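The `[Service]` fragment above is a systemd drop-in: the empty `ExecStart=` clears the ExecStart defined in the base kubelet unit, and the following line replaces it with minikube's flag set. A hedged sketch of installing such a drop-in by hand, mirroring the paths that appear later in this log (the flag list is abbreviated):

	# Sketch only: write a kubelet drop-in like the one minikube generates, then reload systemd.
	sudo mkdir -p /etc/systemd/system/kubelet.service.d
	sudo tee /etc/systemd/system/kubelet.service.d/10-kubeadm.conf >/dev/null <<'EOF'
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-020313 --node-ip=192.168.85.2
	EOF
	sudo systemctl daemon-reload
	sudo systemctl restart kubelet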
	I1009 20:17:59.225524  485563 ssh_runner.go:195] Run: crio config
	I1009 20:17:59.289760  485563 cni.go:84] Creating CNI manager for ""
	I1009 20:17:59.289786  485563 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1009 20:17:59.289808  485563 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1009 20:17:59.289838  485563 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-020313 NodeName:no-preload-020313 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc
/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1009 20:17:59.289964  485563 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-020313"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1009 20:17:59.290039  485563 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1009 20:17:59.304571  485563 binaries.go:44] Found k8s binaries, skipping transfer
	I1009 20:17:59.304653  485563 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1009 20:17:59.313037  485563 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1009 20:17:59.328174  485563 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1009 20:17:59.343088  485563 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2214 bytes)
	I1009 20:17:59.357695  485563 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1009 20:17:59.361905  485563 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
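The two `/bin/bash -c` one-liners above (for host.minikube.internal and control-plane.minikube.internal) use the same idiom: strip any existing entry for the hostname with `grep -v`, append the fresh mapping, write the result to a temp file, then copy it over /etc/hosts so the privileged rewrite happens in a single step. Spelled out on separate lines, the pattern looks roughly like this (hostname and IP taken from the log):

	# Sketch of the /etc/hosts update idiom used above.
	{
	  grep -v $'\tcontrol-plane.minikube.internal$' /etc/hosts   # drop any stale entry
	  echo $'192.168.85.2\tcontrol-plane.minikube.internal'      # append the new mapping
	} > /tmp/hosts.$$
	sudo cp /tmp/hosts.$$ /etc/hosts                              # replace in one privileged copy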
	I1009 20:17:59.373260  485563 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 20:17:59.523133  485563 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1009 20:17:59.541440  485563 certs.go:69] Setting up /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/no-preload-020313 for IP: 192.168.85.2
	I1009 20:17:59.541462  485563 certs.go:195] generating shared ca certs ...
	I1009 20:17:59.541478  485563 certs.go:227] acquiring lock for ca certs: {Name:mk21fccf7de12baa51e226eec44546b42b579a70 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 20:17:59.541602  485563 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21683-294150/.minikube/ca.key
	I1009 20:17:59.541645  485563 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21683-294150/.minikube/proxy-client-ca.key
	I1009 20:17:59.541657  485563 certs.go:257] generating profile certs ...
	I1009 20:17:59.541756  485563 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/no-preload-020313/client.key
	I1009 20:17:59.541820  485563 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/no-preload-020313/apiserver.key.ff7e88d0
	I1009 20:17:59.541865  485563 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/no-preload-020313/proxy-client.key
	I1009 20:17:59.541976  485563 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-294150/.minikube/certs/296002.pem (1338 bytes)
	W1009 20:17:59.542011  485563 certs.go:480] ignoring /home/jenkins/minikube-integration/21683-294150/.minikube/certs/296002_empty.pem, impossibly tiny 0 bytes
	I1009 20:17:59.542022  485563 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-294150/.minikube/certs/ca-key.pem (1679 bytes)
	I1009 20:17:59.542049  485563 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-294150/.minikube/certs/ca.pem (1078 bytes)
	I1009 20:17:59.542077  485563 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-294150/.minikube/certs/cert.pem (1123 bytes)
	I1009 20:17:59.542097  485563 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-294150/.minikube/certs/key.pem (1679 bytes)
	I1009 20:17:59.542140  485563 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-294150/.minikube/files/etc/ssl/certs/2960022.pem (1708 bytes)
	I1009 20:17:59.542726  485563 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-294150/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1009 20:17:59.617731  485563 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-294150/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1009 20:17:59.643397  485563 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-294150/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1009 20:17:59.707301  485563 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-294150/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1009 20:17:59.761367  485563 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/no-preload-020313/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1009 20:17:59.827017  485563 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/no-preload-020313/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1009 20:17:59.887393  485563 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/no-preload-020313/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1009 20:17:59.944913  485563 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/no-preload-020313/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1009 20:17:59.998409  485563 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-294150/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1009 20:18:00.020574  485563 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-294150/.minikube/certs/296002.pem --> /usr/share/ca-certificates/296002.pem (1338 bytes)
	I1009 20:18:00.043425  485563 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-294150/.minikube/files/etc/ssl/certs/2960022.pem --> /usr/share/ca-certificates/2960022.pem (1708 bytes)
	I1009 20:18:00.066864  485563 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1009 20:18:00.091166  485563 ssh_runner.go:195] Run: openssl version
	I1009 20:18:00.101199  485563 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1009 20:18:00.129352  485563 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1009 20:18:00.220247  485563 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  9 19:01 /usr/share/ca-certificates/minikubeCA.pem
	I1009 20:18:00.220335  485563 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1009 20:18:00.328864  485563 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1009 20:18:00.360583  485563 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/296002.pem && ln -fs /usr/share/ca-certificates/296002.pem /etc/ssl/certs/296002.pem"
	I1009 20:18:00.376978  485563 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/296002.pem
	I1009 20:18:00.399128  485563 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  9 19:08 /usr/share/ca-certificates/296002.pem
	I1009 20:18:00.399228  485563 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/296002.pem
	I1009 20:18:00.506640  485563 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/296002.pem /etc/ssl/certs/51391683.0"
	I1009 20:18:00.516429  485563 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2960022.pem && ln -fs /usr/share/ca-certificates/2960022.pem /etc/ssl/certs/2960022.pem"
	I1009 20:18:00.606675  485563 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2960022.pem
	I1009 20:18:00.620322  485563 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  9 19:08 /usr/share/ca-certificates/2960022.pem
	I1009 20:18:00.620389  485563 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2960022.pem
	I1009 20:18:00.688131  485563 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2960022.pem /etc/ssl/certs/3ec20f2e.0"
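The `openssl x509 -hash -noout` runs above compute the subject-name hash that OpenSSL uses to look up CA certificates, and each `ln -fs ... /etc/ssl/certs/<hash>.0` then publishes the corresponding PEM under that hash so system TLS clients will trust it. The same two steps written out explicitly (paths copied from the log):

	# Sketch: register a CA certificate under OpenSSL's hashed-name convention.
	CERT=/usr/share/ca-certificates/minikubeCA.pem
	HASH=$(openssl x509 -hash -noout -in "$CERT")   # e.g. b5213941, as seen above
	sudo ln -fs "$CERT" "/etc/ssl/certs/${HASH}.0"  # <hash>.0 is where OpenSSL looks it up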
	I1009 20:18:00.744809  485563 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1009 20:18:00.755206  485563 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1009 20:18:00.854489  485563 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1009 20:18:01.057704  485563 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1009 20:18:01.175947  485563 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1009 20:18:01.290460  485563 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1009 20:18:01.366479  485563 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
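The series of `openssl x509 ... -checkend 86400` runs above is an expiry check: `-checkend N` exits non-zero if the certificate will expire within N seconds, so 86400 asks whether each control-plane certificate is still valid for at least the next 24 hours. A small illustrative wrapper around the same check:

	# Sketch: fail loudly if a certificate expires within the next 24 hours.
	check_cert() {
	  if openssl x509 -noout -in "$1" -checkend 86400; then
	    echo "$1: valid for at least another day"
	  else
	    echo "$1: expires within 24h (or could not be read)" >&2
	    return 1
	  fi
	}
	check_cert /var/lib/minikube/certs/apiserver-kubelet-client.crt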
	I1009 20:18:01.481489  485563 kubeadm.go:400] StartCluster: {Name:no-preload-020313 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-020313 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APISer
verNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker Bi
naryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1009 20:18:01.481579  485563 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1009 20:18:01.481649  485563 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1009 20:18:01.605366  485563 cri.go:89] found id: "22b87e577d7a8f108e7d77d095e44d5b3392e21fb7da8260fe838b3e930b2229"
	I1009 20:18:01.605392  485563 cri.go:89] found id: "5abd9717aed8a5baaa24ce4dbac3f6a6652f3d3b84cb43dc09007beee7a84423"
	I1009 20:18:01.605398  485563 cri.go:89] found id: "bdcbfecca01ea6e3e0ee392800df2ec67f04ed687955da27cce3925008d3bc5a"
	I1009 20:18:01.605401  485563 cri.go:89] found id: "d49e0cc690dcab668ca06327548e322b4d012301c7ad96444959726efbca4e09"
	I1009 20:18:01.605404  485563 cri.go:89] found id: ""
	I1009 20:18:01.605457  485563 ssh_runner.go:195] Run: sudo runc list -f json
	W1009 20:18:01.635114  485563 kubeadm.go:407] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-09T20:18:01Z" level=error msg="open /run/runc: no such file or directory"
	I1009 20:18:01.635196  485563 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1009 20:18:01.644565  485563 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1009 20:18:01.644582  485563 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1009 20:18:01.644643  485563 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1009 20:18:01.653658  485563 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1009 20:18:01.654048  485563 kubeconfig.go:47] verify endpoint returned: get endpoint: "no-preload-020313" does not appear in /home/jenkins/minikube-integration/21683-294150/kubeconfig
	I1009 20:18:01.654137  485563 kubeconfig.go:62] /home/jenkins/minikube-integration/21683-294150/kubeconfig needs updating (will repair): [kubeconfig missing "no-preload-020313" cluster setting kubeconfig missing "no-preload-020313" context setting]
	I1009 20:18:01.654442  485563 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-294150/kubeconfig: {Name:mke4661344aa77e10bf6690825e8d3a29b29b411 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 20:18:01.656308  485563 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1009 20:18:01.669005  485563 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.85.2
	I1009 20:18:01.669038  485563 kubeadm.go:601] duration metric: took 24.449837ms to restartPrimaryControlPlane
	I1009 20:18:01.669047  485563 kubeadm.go:402] duration metric: took 187.567416ms to StartCluster
	I1009 20:18:01.669062  485563 settings.go:142] acquiring lock: {Name:mk20228ebaa2294ae35726600a0d8058088b24a4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 20:18:01.669219  485563 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21683-294150/kubeconfig
	I1009 20:18:01.669862  485563 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-294150/kubeconfig: {Name:mke4661344aa77e10bf6690825e8d3a29b29b411 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 20:18:01.670072  485563 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1009 20:18:01.670465  485563 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1009 20:18:01.670539  485563 addons.go:69] Setting storage-provisioner=true in profile "no-preload-020313"
	I1009 20:18:01.670553  485563 addons.go:238] Setting addon storage-provisioner=true in "no-preload-020313"
	W1009 20:18:01.670559  485563 addons.go:247] addon storage-provisioner should already be in state true
	I1009 20:18:01.670579  485563 host.go:66] Checking if "no-preload-020313" exists ...
	I1009 20:18:01.671074  485563 cli_runner.go:164] Run: docker container inspect no-preload-020313 --format={{.State.Status}}
	I1009 20:18:01.671682  485563 config.go:182] Loaded profile config "no-preload-020313": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1009 20:18:01.671776  485563 addons.go:69] Setting dashboard=true in profile "no-preload-020313"
	I1009 20:18:01.671811  485563 addons.go:238] Setting addon dashboard=true in "no-preload-020313"
	W1009 20:18:01.671834  485563 addons.go:247] addon dashboard should already be in state true
	I1009 20:18:01.671886  485563 host.go:66] Checking if "no-preload-020313" exists ...
	I1009 20:18:01.672710  485563 cli_runner.go:164] Run: docker container inspect no-preload-020313 --format={{.State.Status}}
	I1009 20:18:01.674029  485563 addons.go:69] Setting default-storageclass=true in profile "no-preload-020313"
	I1009 20:18:01.674063  485563 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-020313"
	I1009 20:18:01.674644  485563 cli_runner.go:164] Run: docker container inspect no-preload-020313 --format={{.State.Status}}
	I1009 20:18:01.683962  485563 out.go:179] * Verifying Kubernetes components...
	I1009 20:18:01.690151  485563 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 20:18:01.774646  485563 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1009 20:18:01.777516  485563 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1009 20:18:01.777538  485563 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1009 20:18:01.777602  485563 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-020313
	I1009 20:18:01.786044  485563 addons.go:238] Setting addon default-storageclass=true in "no-preload-020313"
	W1009 20:18:01.786069  485563 addons.go:247] addon default-storageclass should already be in state true
	I1009 20:18:01.786094  485563 host.go:66] Checking if "no-preload-020313" exists ...
	I1009 20:18:01.786508  485563 cli_runner.go:164] Run: docker container inspect no-preload-020313 --format={{.State.Status}}
	I1009 20:18:01.787836  485563 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1009 20:18:01.792027  485563 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1009 20:18:01.053086  487957 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1009 20:18:01.053443  487957 start.go:160] libmachine.API.Create for "embed-certs-565110" (driver="docker")
	I1009 20:18:01.053522  487957 client.go:168] LocalClient.Create starting
	I1009 20:18:01.053664  487957 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21683-294150/.minikube/certs/ca.pem
	I1009 20:18:01.053737  487957 main.go:141] libmachine: Decoding PEM data...
	I1009 20:18:01.053774  487957 main.go:141] libmachine: Parsing certificate...
	I1009 20:18:01.053863  487957 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21683-294150/.minikube/certs/cert.pem
	I1009 20:18:01.053922  487957 main.go:141] libmachine: Decoding PEM data...
	I1009 20:18:01.053949  487957 main.go:141] libmachine: Parsing certificate...
	I1009 20:18:01.054442  487957 cli_runner.go:164] Run: docker network inspect embed-certs-565110 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1009 20:18:01.081290  487957 cli_runner.go:211] docker network inspect embed-certs-565110 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1009 20:18:01.081381  487957 network_create.go:284] running [docker network inspect embed-certs-565110] to gather additional debugging logs...
	I1009 20:18:01.081404  487957 cli_runner.go:164] Run: docker network inspect embed-certs-565110
	W1009 20:18:01.109258  487957 cli_runner.go:211] docker network inspect embed-certs-565110 returned with exit code 1
	I1009 20:18:01.109287  487957 network_create.go:287] error running [docker network inspect embed-certs-565110]: docker network inspect embed-certs-565110: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network embed-certs-565110 not found
	I1009 20:18:01.109301  487957 network_create.go:289] output of [docker network inspect embed-certs-565110]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network embed-certs-565110 not found
	
	** /stderr **
	I1009 20:18:01.109400  487957 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1009 20:18:01.142304  487957 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-3847a6577684 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:6e:b5:e6:7d:c7:ad} reservation:<nil>}
	I1009 20:18:01.142682  487957 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-5742e12e0dad IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:9e:82:91:fd:a6:fb} reservation:<nil>}
	I1009 20:18:01.142904  487957 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-11b099636187 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:0e:bb:e5:1b:6d:a2} reservation:<nil>}
	I1009 20:18:01.143323  487957 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40019a32d0}
	I1009 20:18:01.143342  487957 network_create.go:124] attempt to create docker network embed-certs-565110 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1009 20:18:01.143400  487957 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=embed-certs-565110 embed-certs-565110
	I1009 20:18:01.236989  487957 network_create.go:108] docker network embed-certs-565110 192.168.76.0/24 created
	I1009 20:18:01.237032  487957 kic.go:121] calculated static IP "192.168.76.2" for the "embed-certs-565110" container
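	The step above probes the existing bridge networks, skips every /24 that is already taken (192.168.49.0, .58.0, .67.0) and creates the cluster network on the first free one (192.168.76.0/24) with a fixed gateway and MTU. A simplified Go sketch of the same idea, assuming the docker CLI is on PATH; the names (demo-net, networkExists, createIsolatedNetwork) are illustrative and not minikube's real network_create.go logic, which inspects taken subnets up front instead of retrying on create failure.

package main

import (
	"fmt"
	"os/exec"
)

// networkExists reports whether a docker network with the given name is already defined.
func networkExists(name string) bool {
	return exec.Command("docker", "network", "inspect", name).Run() == nil
}

// createIsolatedNetwork walks candidate /24 subnets and creates the first one that
// docker accepts, mirroring the "skipping subnet ... that is taken" probing above.
func createIsolatedNetwork(name string) (string, error) {
	for _, octet := range []int{49, 58, 67, 76, 85, 94} {
		subnet := fmt.Sprintf("192.168.%d.0/24", octet)
		gateway := fmt.Sprintf("192.168.%d.1", octet)
		cmd := exec.Command("docker", "network", "create",
			"--driver=bridge",
			"--subnet="+subnet,
			"--gateway="+gateway,
			"-o", "com.docker.network.driver.mtu=1500",
			name)
		if err := cmd.Run(); err != nil {
			continue // subnet (or name) already taken; try the next candidate
		}
		return subnet, nil
	}
	return "", fmt.Errorf("no free subnet found for network %q", name)
}

func main() {
	if networkExists("demo-net") {
		fmt.Println("network already exists")
		return
	}
	subnet, err := createIsolatedNetwork("demo-net")
	fmt.Println(subnet, err)
}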
	I1009 20:18:01.237312  487957 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1009 20:18:01.270871  487957 cli_runner.go:164] Run: docker volume create embed-certs-565110 --label name.minikube.sigs.k8s.io=embed-certs-565110 --label created_by.minikube.sigs.k8s.io=true
	I1009 20:18:01.299377  487957 oci.go:103] Successfully created a docker volume embed-certs-565110
	I1009 20:18:01.299478  487957 cli_runner.go:164] Run: docker run --rm --name embed-certs-565110-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-565110 --entrypoint /usr/bin/test -v embed-certs-565110:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 -d /var/lib
	I1009 20:18:02.189014  487957 oci.go:107] Successfully prepared a docker volume embed-certs-565110
	I1009 20:18:02.189060  487957 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1009 20:18:02.189079  487957 kic.go:194] Starting extracting preloaded images to volume ...
	I1009 20:18:02.189165  487957 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21683-294150/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v embed-certs-565110:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 -I lz4 -xf /preloaded.tar -C /extractDir
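	The preloaded image tarball is unpacked straight into the freshly created named volume by mounting both into a throwaway container whose entrypoint is tar (started at 20:18:02.18, completed at 20:18:08.07 below). A compact sketch of that pattern with hypothetical tarball, volume and image names; the chosen image must ship both tar and lz4, as the kicbase image used above does.

package main

import (
	"fmt"
	"os/exec"
)

// extractIntoVolume unpacks an lz4-compressed tarball into a docker named volume by
// running tar inside a disposable container, as the --entrypoint /usr/bin/tar run above does.
func extractIntoVolume(tarball, volume, image string) error {
	cmd := exec.Command("docker", "run", "--rm",
		"--entrypoint", "/usr/bin/tar",
		"-v", tarball+":/preloaded.tar:ro",
		"-v", volume+":/extractDir",
		image,
		"-I", "lz4", "-xf", "/preloaded.tar", "-C", "/extractDir")
	out, err := cmd.CombinedOutput()
	if err != nil {
		return fmt.Errorf("extract failed: %v\n%s", err, out)
	}
	return nil
}

func main() {
	// Placeholder arguments; in the run above the image is the kicbase build and the
	// volume is the per-profile volume created just before this step.
	fmt.Println(extractIntoVolume("/tmp/preloaded-images.tar.lz4", "demo-volume", "some-image-with-tar-and-lz4"))
}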
	I1009 20:18:01.799264  485563 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1009 20:18:01.799306  485563 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1009 20:18:01.799382  485563 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-020313
	I1009 20:18:01.824878  485563 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33431 SSHKeyPath:/home/jenkins/minikube-integration/21683-294150/.minikube/machines/no-preload-020313/id_rsa Username:docker}
	I1009 20:18:01.839295  485563 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1009 20:18:01.839319  485563 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1009 20:18:01.839380  485563 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-020313
	I1009 20:18:01.868643  485563 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33431 SSHKeyPath:/home/jenkins/minikube-integration/21683-294150/.minikube/machines/no-preload-020313/id_rsa Username:docker}
	I1009 20:18:01.885200  485563 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33431 SSHKeyPath:/home/jenkins/minikube-integration/21683-294150/.minikube/machines/no-preload-020313/id_rsa Username:docker}
	I1009 20:18:02.128754  485563 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1009 20:18:02.234903  485563 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1009 20:18:02.344181  485563 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1009 20:18:02.344203  485563 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1009 20:18:02.479008  485563 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1009 20:18:02.487768  485563 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1009 20:18:02.487793  485563 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1009 20:18:02.544681  485563 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1009 20:18:02.544706  485563 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1009 20:18:02.643290  485563 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1009 20:18:02.643315  485563 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1009 20:18:02.746960  485563 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1009 20:18:02.747004  485563 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1009 20:18:02.775323  485563 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1009 20:18:02.775362  485563 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1009 20:18:02.807163  485563 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1009 20:18:02.807193  485563 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1009 20:18:02.831381  485563 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1009 20:18:02.831408  485563 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1009 20:18:02.859780  485563 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1009 20:18:02.859808  485563 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1009 20:18:02.924500  485563 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1009 20:18:10.031390  485563 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (7.902603361s)
	I1009 20:18:10.031448  485563 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (7.796524649s)
	I1009 20:18:10.031491  485563 node_ready.go:35] waiting up to 6m0s for node "no-preload-020313" to be "Ready" ...
	I1009 20:18:10.031843  485563 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (7.552754862s)
	I1009 20:18:10.032131  485563 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (7.107584389s)
	I1009 20:18:10.037392  485563 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p no-preload-020313 addons enable metrics-server
	
	I1009 20:18:10.082370  485563 node_ready.go:49] node "no-preload-020313" is "Ready"
	I1009 20:18:10.082403  485563 node_ready.go:38] duration metric: took 50.888327ms for node "no-preload-020313" to be "Ready" ...
	I1009 20:18:10.082418  485563 api_server.go:52] waiting for apiserver process to appear ...
	I1009 20:18:10.082478  485563 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:10.093413  485563 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	I1009 20:18:08.074561  487957 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21683-294150/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v embed-certs-565110:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 -I lz4 -xf /preloaded.tar -C /extractDir: (5.885358999s)
	I1009 20:18:08.074593  487957 kic.go:203] duration metric: took 5.885510985s to extract preloaded images to volume ...
	W1009 20:18:08.074741  487957 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1009 20:18:08.074851  487957 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1009 20:18:08.179323  487957 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname embed-certs-565110 --name embed-certs-565110 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-565110 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=embed-certs-565110 --network embed-certs-565110 --ip 192.168.76.2 --volume embed-certs-565110:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92
	I1009 20:18:08.617473  487957 cli_runner.go:164] Run: docker container inspect embed-certs-565110 --format={{.State.Running}}
	I1009 20:18:08.645855  487957 cli_runner.go:164] Run: docker container inspect embed-certs-565110 --format={{.State.Status}}
	I1009 20:18:08.674614  487957 cli_runner.go:164] Run: docker exec embed-certs-565110 stat /var/lib/dpkg/alternatives/iptables
	I1009 20:18:08.759637  487957 oci.go:144] the created container "embed-certs-565110" has a running status.
	I1009 20:18:08.759672  487957 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21683-294150/.minikube/machines/embed-certs-565110/id_rsa...
	I1009 20:18:09.057433  487957 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21683-294150/.minikube/machines/embed-certs-565110/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1009 20:18:09.084846  487957 cli_runner.go:164] Run: docker container inspect embed-certs-565110 --format={{.State.Status}}
	I1009 20:18:09.112242  487957 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1009 20:18:09.112266  487957 kic_runner.go:114] Args: [docker exec --privileged embed-certs-565110 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1009 20:18:09.208922  487957 cli_runner.go:164] Run: docker container inspect embed-certs-565110 --format={{.State.Status}}
	I1009 20:18:09.231987  487957 machine.go:93] provisionDockerMachine start ...
	I1009 20:18:09.232111  487957 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-565110
	I1009 20:18:09.258909  487957 main.go:141] libmachine: Using SSH client type: native
	I1009 20:18:09.259265  487957 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33436 <nil> <nil>}
	I1009 20:18:09.259281  487957 main.go:141] libmachine: About to run SSH command:
	hostname
	I1009 20:18:09.259927  487957 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1009 20:18:10.097293  485563 addons.go:514] duration metric: took 8.426811628s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1009 20:18:10.109605  485563 api_server.go:72] duration metric: took 8.439504688s to wait for apiserver process to appear ...
	I1009 20:18:10.109628  485563 api_server.go:88] waiting for apiserver healthz status ...
	I1009 20:18:10.109647  485563 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1009 20:18:10.144893  485563 api_server.go:279] https://192.168.85.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1009 20:18:10.144918  485563 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1009 20:18:10.610265  485563 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1009 20:18:10.618589  485563 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1009 20:18:10.619785  485563 api_server.go:141] control plane version: v1.34.1
	I1009 20:18:10.619814  485563 api_server.go:131] duration metric: took 510.179593ms to wait for apiserver health ...
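	The run above polls https://192.168.85.2:8443/healthz until it stops returning 500 (here the rbac/bootstrap-roles post-start hook was still pending) and answers 200 ok. A minimal polling sketch under the assumption that the apiserver certificate is not loaded, so TLS verification is skipped; the URL and timeout are illustrative.

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitForHealthz polls the apiserver /healthz endpoint until it returns 200 OK
// or the deadline expires.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// The apiserver cert is signed by minikubeCA, which this sketch does not load,
		// so certificate verification is disabled here for brevity.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
			fmt.Printf("healthz %d: %s\n", resp.StatusCode, body)
		}
		time.Sleep(500 * time.Millisecond) // roughly the retry cadence seen in the log
	}
	return fmt.Errorf("apiserver %s never became healthy within %s", url, timeout)
}

func main() {
	fmt.Println(waitForHealthz("https://192.168.85.2:8443/healthz", 2*time.Minute))
}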
	I1009 20:18:10.619825  485563 system_pods.go:43] waiting for kube-system pods to appear ...
	I1009 20:18:10.624021  485563 system_pods.go:59] 8 kube-system pods found
	I1009 20:18:10.624065  485563 system_pods.go:61] "coredns-66bc5c9577-h7jz6" [50ef033a-7db2-4326-a6d6-574c692f50ba] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1009 20:18:10.624075  485563 system_pods.go:61] "etcd-no-preload-020313" [ffe41bc4-bdd7-4da8-9781-364de0d17db9] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1009 20:18:10.624080  485563 system_pods.go:61] "kindnet-47kwl" [60a32ed3-a01b-47ee-9128-d0763b3502ee] Running
	I1009 20:18:10.624087  485563 system_pods.go:61] "kube-apiserver-no-preload-020313" [d8f0991e-2fdd-4635-b144-99bfccfc61c0] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1009 20:18:10.624097  485563 system_pods.go:61] "kube-controller-manager-no-preload-020313" [a14b0780-83e0-4076-9076-c673c69ee034] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1009 20:18:10.624105  485563 system_pods.go:61] "kube-proxy-cd5v6" [7843ebcc-c450-40f9-b0dd-6cb09dd70a81] Running
	I1009 20:18:10.624112  485563 system_pods.go:61] "kube-scheduler-no-preload-020313" [a3f3beaf-2476-4cc8-845c-e0230d0fb499] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1009 20:18:10.624121  485563 system_pods.go:61] "storage-provisioner" [03ca5595-692b-4e09-a599-439b385749c1] Running
	I1009 20:18:10.624127  485563 system_pods.go:74] duration metric: took 4.295863ms to wait for pod list to return data ...
	I1009 20:18:10.624152  485563 default_sa.go:34] waiting for default service account to be created ...
	I1009 20:18:10.627061  485563 default_sa.go:45] found service account: "default"
	I1009 20:18:10.627089  485563 default_sa.go:55] duration metric: took 2.930232ms for default service account to be created ...
	I1009 20:18:10.627099  485563 system_pods.go:116] waiting for k8s-apps to be running ...
	I1009 20:18:10.630551  485563 system_pods.go:86] 8 kube-system pods found
	I1009 20:18:10.630635  485563 system_pods.go:89] "coredns-66bc5c9577-h7jz6" [50ef033a-7db2-4326-a6d6-574c692f50ba] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1009 20:18:10.630657  485563 system_pods.go:89] "etcd-no-preload-020313" [ffe41bc4-bdd7-4da8-9781-364de0d17db9] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1009 20:18:10.630664  485563 system_pods.go:89] "kindnet-47kwl" [60a32ed3-a01b-47ee-9128-d0763b3502ee] Running
	I1009 20:18:10.630671  485563 system_pods.go:89] "kube-apiserver-no-preload-020313" [d8f0991e-2fdd-4635-b144-99bfccfc61c0] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1009 20:18:10.630680  485563 system_pods.go:89] "kube-controller-manager-no-preload-020313" [a14b0780-83e0-4076-9076-c673c69ee034] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1009 20:18:10.630709  485563 system_pods.go:89] "kube-proxy-cd5v6" [7843ebcc-c450-40f9-b0dd-6cb09dd70a81] Running
	I1009 20:18:10.630731  485563 system_pods.go:89] "kube-scheduler-no-preload-020313" [a3f3beaf-2476-4cc8-845c-e0230d0fb499] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1009 20:18:10.630743  485563 system_pods.go:89] "storage-provisioner" [03ca5595-692b-4e09-a599-439b385749c1] Running
	I1009 20:18:10.630750  485563 system_pods.go:126] duration metric: took 3.645302ms to wait for k8s-apps to be running ...
	I1009 20:18:10.630762  485563 system_svc.go:44] waiting for kubelet service to be running ....
	I1009 20:18:10.630834  485563 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1009 20:18:10.646461  485563 system_svc.go:56] duration metric: took 15.690099ms WaitForService to wait for kubelet
	I1009 20:18:10.646490  485563 kubeadm.go:586] duration metric: took 8.976395522s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1009 20:18:10.646510  485563 node_conditions.go:102] verifying NodePressure condition ...
	I1009 20:18:10.649916  485563 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1009 20:18:10.649949  485563 node_conditions.go:123] node cpu capacity is 2
	I1009 20:18:10.649964  485563 node_conditions.go:105] duration metric: took 3.448869ms to run NodePressure ...
	I1009 20:18:10.649977  485563 start.go:242] waiting for startup goroutines ...
	I1009 20:18:10.649985  485563 start.go:247] waiting for cluster config update ...
	I1009 20:18:10.650001  485563 start.go:256] writing updated cluster config ...
	I1009 20:18:10.650295  485563 ssh_runner.go:195] Run: rm -f paused
	I1009 20:18:10.654360  485563 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1009 20:18:10.658044  485563 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-h7jz6" in "kube-system" namespace to be "Ready" or be gone ...
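	After the addons come up, the run waits up to 4m0s for every core kube-system pod matched by the component/k8s-app labels listed above to report Ready. The same wait can be expressed with kubectl directly; a sketch assuming kubectl is on PATH and a kubeconfig pointing at the cluster is active, with the selector list trimmed for brevity.

package main

import (
	"fmt"
	"os/exec"
)

// waitForLabelledPods shells out to kubectl to block until every pod matching the
// label selector reports the Ready condition, or the timeout expires.
func waitForLabelledPods(selector, timeout string) error {
	cmd := exec.Command("kubectl", "-n", "kube-system",
		"wait", "pod",
		"--selector", selector,
		"--for=condition=Ready",
		"--timeout", timeout)
	out, err := cmd.CombinedOutput()
	if err != nil {
		return fmt.Errorf("pods %q not ready: %v\n%s", selector, err, out)
	}
	return nil
}

func main() {
	// Selectors mirror part of the label list in the log above (kube-dns, etcd, apiserver, ...).
	for _, sel := range []string{"k8s-app=kube-dns", "component=etcd", "component=kube-apiserver"} {
		fmt.Println(sel, waitForLabelledPods(sel, "4m"))
	}
}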
	I1009 20:18:12.408986  487957 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-565110
	
	I1009 20:18:12.409068  487957 ubuntu.go:182] provisioning hostname "embed-certs-565110"
	I1009 20:18:12.409162  487957 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-565110
	I1009 20:18:12.443590  487957 main.go:141] libmachine: Using SSH client type: native
	I1009 20:18:12.443907  487957 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33436 <nil> <nil>}
	I1009 20:18:12.443926  487957 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-565110 && echo "embed-certs-565110" | sudo tee /etc/hostname
	I1009 20:18:12.614528  487957 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-565110
	
	I1009 20:18:12.614655  487957 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-565110
	I1009 20:18:12.631542  487957 main.go:141] libmachine: Using SSH client type: native
	I1009 20:18:12.631869  487957 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33436 <nil> <nil>}
	I1009 20:18:12.631892  487957 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-565110' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-565110/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-565110' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1009 20:18:12.777423  487957 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1009 20:18:12.777514  487957 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21683-294150/.minikube CaCertPath:/home/jenkins/minikube-integration/21683-294150/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21683-294150/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21683-294150/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21683-294150/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21683-294150/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21683-294150/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21683-294150/.minikube}
	I1009 20:18:12.777567  487957 ubuntu.go:190] setting up certificates
	I1009 20:18:12.777603  487957 provision.go:84] configureAuth start
	I1009 20:18:12.777708  487957 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-565110
	I1009 20:18:12.795597  487957 provision.go:143] copyHostCerts
	I1009 20:18:12.795662  487957 exec_runner.go:144] found /home/jenkins/minikube-integration/21683-294150/.minikube/key.pem, removing ...
	I1009 20:18:12.795672  487957 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21683-294150/.minikube/key.pem
	I1009 20:18:12.795748  487957 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-294150/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21683-294150/.minikube/key.pem (1679 bytes)
	I1009 20:18:12.795855  487957 exec_runner.go:144] found /home/jenkins/minikube-integration/21683-294150/.minikube/ca.pem, removing ...
	I1009 20:18:12.795860  487957 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21683-294150/.minikube/ca.pem
	I1009 20:18:12.795888  487957 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-294150/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21683-294150/.minikube/ca.pem (1078 bytes)
	I1009 20:18:12.795945  487957 exec_runner.go:144] found /home/jenkins/minikube-integration/21683-294150/.minikube/cert.pem, removing ...
	I1009 20:18:12.795950  487957 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21683-294150/.minikube/cert.pem
	I1009 20:18:12.795974  487957 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-294150/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21683-294150/.minikube/cert.pem (1123 bytes)
	I1009 20:18:12.796026  487957 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21683-294150/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21683-294150/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21683-294150/.minikube/certs/ca-key.pem org=jenkins.embed-certs-565110 san=[127.0.0.1 192.168.76.2 embed-certs-565110 localhost minikube]
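	The machine server certificate generated above carries SANs for 127.0.0.1, 192.168.76.2, embed-certs-565110, localhost and minikube. A minimal sketch of issuing such a cert with Go's crypto/x509, self-signed for brevity where the real provisioner signs with the minikube CA key; the organization name and output path are illustrative.

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"fmt"
	"math/big"
	"net"
	"os"
	"time"
)

// writeServerCert issues a self-signed server certificate whose SANs cover the
// given IPs and DNS names, similar to the SAN list in the log line above.
func writeServerCert(path string, ips []net.IP, dnsNames []string) error {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		return err
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(time.Now().UnixNano()),
		Subject:      pkix.Name{Organization: []string{"example-org"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour), // matches the profile's CertExpiration
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses:  ips,
		DNSNames:     dnsNames,
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		return err
	}
	return os.WriteFile(path, pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der}), 0o644)
}

func main() {
	ips := []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.76.2")}
	fmt.Println(writeServerCert("server.pem", ips, []string{"embed-certs-565110", "localhost", "minikube"}))
}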
	I1009 20:18:13.060338  487957 provision.go:177] copyRemoteCerts
	I1009 20:18:13.060419  487957 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1009 20:18:13.060466  487957 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-565110
	I1009 20:18:13.077461  487957 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33436 SSHKeyPath:/home/jenkins/minikube-integration/21683-294150/.minikube/machines/embed-certs-565110/id_rsa Username:docker}
	I1009 20:18:13.180557  487957 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-294150/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1009 20:18:13.198137  487957 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-294150/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1009 20:18:13.215683  487957 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-294150/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1009 20:18:13.232691  487957 provision.go:87] duration metric: took 455.050935ms to configureAuth
	I1009 20:18:13.232720  487957 ubuntu.go:206] setting minikube options for container-runtime
	I1009 20:18:13.232915  487957 config.go:182] Loaded profile config "embed-certs-565110": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1009 20:18:13.233030  487957 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-565110
	I1009 20:18:13.249905  487957 main.go:141] libmachine: Using SSH client type: native
	I1009 20:18:13.250220  487957 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33436 <nil> <nil>}
	I1009 20:18:13.250240  487957 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1009 20:18:13.596481  487957 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1009 20:18:13.596506  487957 machine.go:96] duration metric: took 4.364493205s to provisionDockerMachine
	I1009 20:18:13.596516  487957 client.go:171] duration metric: took 12.542973622s to LocalClient.Create
	I1009 20:18:13.596531  487957 start.go:168] duration metric: took 12.543089792s to libmachine.API.Create "embed-certs-565110"
	I1009 20:18:13.596538  487957 start.go:294] postStartSetup for "embed-certs-565110" (driver="docker")
	I1009 20:18:13.596549  487957 start.go:323] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1009 20:18:13.596615  487957 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1009 20:18:13.596676  487957 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-565110
	I1009 20:18:13.617787  487957 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33436 SSHKeyPath:/home/jenkins/minikube-integration/21683-294150/.minikube/machines/embed-certs-565110/id_rsa Username:docker}
	I1009 20:18:13.725765  487957 ssh_runner.go:195] Run: cat /etc/os-release
	I1009 20:18:13.729132  487957 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1009 20:18:13.729161  487957 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1009 20:18:13.729172  487957 filesync.go:126] Scanning /home/jenkins/minikube-integration/21683-294150/.minikube/addons for local assets ...
	I1009 20:18:13.729226  487957 filesync.go:126] Scanning /home/jenkins/minikube-integration/21683-294150/.minikube/files for local assets ...
	I1009 20:18:13.729312  487957 filesync.go:149] local asset: /home/jenkins/minikube-integration/21683-294150/.minikube/files/etc/ssl/certs/2960022.pem -> 2960022.pem in /etc/ssl/certs
	I1009 20:18:13.729413  487957 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1009 20:18:13.737017  487957 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-294150/.minikube/files/etc/ssl/certs/2960022.pem --> /etc/ssl/certs/2960022.pem (1708 bytes)
	I1009 20:18:13.756735  487957 start.go:297] duration metric: took 160.181177ms for postStartSetup
	I1009 20:18:13.757184  487957 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-565110
	I1009 20:18:13.773952  487957 profile.go:143] Saving config to /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/embed-certs-565110/config.json ...
	I1009 20:18:13.774246  487957 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1009 20:18:13.774302  487957 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-565110
	I1009 20:18:13.790882  487957 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33436 SSHKeyPath:/home/jenkins/minikube-integration/21683-294150/.minikube/machines/embed-certs-565110/id_rsa Username:docker}
	I1009 20:18:13.890407  487957 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1009 20:18:13.895410  487957 start.go:129] duration metric: took 12.846015538s to createHost
	I1009 20:18:13.895435  487957 start.go:84] releasing machines lock for "embed-certs-565110", held for 12.84615204s
	I1009 20:18:13.895507  487957 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-565110
	I1009 20:18:13.912673  487957 ssh_runner.go:195] Run: cat /version.json
	I1009 20:18:13.912711  487957 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1009 20:18:13.912724  487957 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-565110
	I1009 20:18:13.912764  487957 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-565110
	I1009 20:18:13.934714  487957 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33436 SSHKeyPath:/home/jenkins/minikube-integration/21683-294150/.minikube/machines/embed-certs-565110/id_rsa Username:docker}
	I1009 20:18:13.935171  487957 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33436 SSHKeyPath:/home/jenkins/minikube-integration/21683-294150/.minikube/machines/embed-certs-565110/id_rsa Username:docker}
	I1009 20:18:14.037180  487957 ssh_runner.go:195] Run: systemctl --version
	I1009 20:18:14.130910  487957 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1009 20:18:14.174802  487957 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1009 20:18:14.179297  487957 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1009 20:18:14.179375  487957 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1009 20:18:14.222407  487957 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1009 20:18:14.222440  487957 start.go:496] detecting cgroup driver to use...
	I1009 20:18:14.222471  487957 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1009 20:18:14.222523  487957 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1009 20:18:14.244633  487957 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1009 20:18:14.262447  487957 docker.go:218] disabling cri-docker service (if available) ...
	I1009 20:18:14.262509  487957 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1009 20:18:14.284384  487957 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1009 20:18:14.306165  487957 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1009 20:18:14.434469  487957 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1009 20:18:14.610892  487957 docker.go:234] disabling docker service ...
	I1009 20:18:14.611108  487957 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1009 20:18:14.639331  487957 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1009 20:18:14.673968  487957 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1009 20:18:14.846807  487957 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1009 20:18:15.038156  487957 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1009 20:18:15.065809  487957 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1009 20:18:15.087948  487957 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1009 20:18:15.088021  487957 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 20:18:15.103023  487957 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1009 20:18:15.103096  487957 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 20:18:15.115108  487957 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 20:18:15.128438  487957 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 20:18:15.140537  487957 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1009 20:18:15.154873  487957 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 20:18:15.174178  487957 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 20:18:15.188211  487957 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 20:18:15.198267  487957 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1009 20:18:15.206352  487957 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1009 20:18:15.214468  487957 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 20:18:15.382207  487957 ssh_runner.go:195] Run: sudo systemctl restart crio
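	The steps from 20:18:15.08 through the restart above rewrite /etc/crio/crio.conf.d/02-crio.conf in place (pause image, cgroup manager, conmon cgroup, sysctls) and then reload systemd and restart cri-o. A rough local Go equivalent of the two main sed substitutions, assuming the drop-in file already contains pause_image and cgroup_manager lines and that the program runs with enough privilege to edit it.

package main

import (
	"fmt"
	"os"
	"os/exec"
	"regexp"
)

// patchCrioConf rewrites the pause image and cgroup manager in a cri-o drop-in,
// mirroring the sed edits in the log; path and values are illustrative.
func patchCrioConf(path, pauseImage, cgroupManager string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	out := regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAll(data, []byte(fmt.Sprintf("pause_image = %q", pauseImage)))
	out = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAll(out, []byte(fmt.Sprintf("cgroup_manager = %q", cgroupManager)))
	return os.WriteFile(path, out, 0o644)
}

func main() {
	if err := patchCrioConf("/etc/crio/crio.conf.d/02-crio.conf",
		"registry.k8s.io/pause:3.10.1", "cgroupfs"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	// Pick up the new config, as the log does with daemon-reload followed by restart crio.
	_ = exec.Command("systemctl", "daemon-reload").Run()
	_ = exec.Command("systemctl", "restart", "crio").Run()
}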
	I1009 20:18:15.585151  487957 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1009 20:18:15.585283  487957 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1009 20:18:15.591761  487957 start.go:564] Will wait 60s for crictl version
	I1009 20:18:15.591887  487957 ssh_runner.go:195] Run: which crictl
	I1009 20:18:15.600356  487957 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1009 20:18:15.635269  487957 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1009 20:18:15.635366  487957 ssh_runner.go:195] Run: crio --version
	I1009 20:18:15.687811  487957 ssh_runner.go:195] Run: crio --version
	I1009 20:18:15.729423  487957 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	W1009 20:18:12.663946  485563 pod_ready.go:104] pod "coredns-66bc5c9577-h7jz6" is not "Ready", error: <nil>
	W1009 20:18:14.664367  485563 pod_ready.go:104] pod "coredns-66bc5c9577-h7jz6" is not "Ready", error: <nil>
	W1009 20:18:16.704606  485563 pod_ready.go:104] pod "coredns-66bc5c9577-h7jz6" is not "Ready", error: <nil>
	I1009 20:18:15.732452  487957 cli_runner.go:164] Run: docker network inspect embed-certs-565110 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1009 20:18:15.749476  487957 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1009 20:18:15.754003  487957 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1009 20:18:15.767169  487957 kubeadm.go:883] updating cluster {Name:embed-certs-565110 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-565110 Namespace:default APIServerHAVIP: APIServerName:minikubeCA AP
IServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath
: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1009 20:18:15.767299  487957 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1009 20:18:15.767358  487957 ssh_runner.go:195] Run: sudo crictl images --output json
	I1009 20:18:15.822609  487957 crio.go:514] all images are preloaded for cri-o runtime.
	I1009 20:18:15.822629  487957 crio.go:433] Images already preloaded, skipping extraction
	I1009 20:18:15.822684  487957 ssh_runner.go:195] Run: sudo crictl images --output json
	I1009 20:18:15.859594  487957 crio.go:514] all images are preloaded for cri-o runtime.
	I1009 20:18:15.859620  487957 cache_images.go:85] Images are preloaded, skipping loading
	I1009 20:18:15.859628  487957 kubeadm.go:934] updating node { 192.168.76.2 8443 v1.34.1 crio true true} ...
	I1009 20:18:15.859721  487957 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=embed-certs-565110 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:embed-certs-565110 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1009 20:18:15.859808  487957 ssh_runner.go:195] Run: crio config
	I1009 20:18:15.969678  487957 cni.go:84] Creating CNI manager for ""
	I1009 20:18:15.969751  487957 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1009 20:18:15.969781  487957 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1009 20:18:15.969838  487957 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-565110 NodeName:embed-certs-565110 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/e
tc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1009 20:18:15.970029  487957 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-565110"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1009 20:18:15.970149  487957 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1009 20:18:15.981678  487957 binaries.go:44] Found k8s binaries, skipping transfer
	I1009 20:18:15.981829  487957 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1009 20:18:15.990181  487957 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (368 bytes)
	I1009 20:18:16.005845  487957 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1009 20:18:16.022617  487957 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2215 bytes)
	I1009 20:18:16.037942  487957 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1009 20:18:16.042338  487957 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
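	Both host.minikube.internal (20:18:15.75 above) and control-plane.minikube.internal (20:18:16.04) are pinned in the guest's /etc/hosts by filtering out any stale entry and appending the current IP. The same filter-and-append in plain Go, with illustrative arguments; the real run performs this remotely over SSH via the bash pipeline shown above.

package main

import (
	"fmt"
	"os"
	"strings"
)

// pinHostsEntry removes any existing line for the given hostname from an
// /etc/hosts-style file and appends "<ip>\t<hostname>", as the log's
// grep -v / echo / cp pipeline does.
func pinHostsEntry(path, ip, hostname string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if strings.HasSuffix(line, "\t"+hostname) {
			continue // drop the stale entry for this hostname
		}
		kept = append(kept, line)
	}
	kept = append(kept, ip+"\t"+hostname)
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0o644)
}

func main() {
	fmt.Println(pinHostsEntry("/etc/hosts", "192.168.76.2", "control-plane.minikube.internal"))
}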
	I1009 20:18:16.053143  487957 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 20:18:16.220442  487957 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1009 20:18:16.247946  487957 certs.go:69] Setting up /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/embed-certs-565110 for IP: 192.168.76.2
	I1009 20:18:16.248015  487957 certs.go:195] generating shared ca certs ...
	I1009 20:18:16.248046  487957 certs.go:227] acquiring lock for ca certs: {Name:mk21fccf7de12baa51e226eec44546b42b579a70 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 20:18:16.248264  487957 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21683-294150/.minikube/ca.key
	I1009 20:18:16.248353  487957 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21683-294150/.minikube/proxy-client-ca.key
	I1009 20:18:16.248383  487957 certs.go:257] generating profile certs ...
	I1009 20:18:16.248489  487957 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/embed-certs-565110/client.key
	I1009 20:18:16.248533  487957 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/embed-certs-565110/client.crt with IP's: []
	I1009 20:18:17.725333  487957 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/embed-certs-565110/client.crt ...
	I1009 20:18:17.725418  487957 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/embed-certs-565110/client.crt: {Name:mk27e25d4844f4e5256972d00578c76ca030ddc6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 20:18:17.725622  487957 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/embed-certs-565110/client.key ...
	I1009 20:18:17.725657  487957 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/embed-certs-565110/client.key: {Name:mk5baef3c8c7f4cffd2455d6251cc2bf43177213 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 20:18:17.725803  487957 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/embed-certs-565110/apiserver.key.e7b9ab9d
	I1009 20:18:17.725842  487957 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/embed-certs-565110/apiserver.crt.e7b9ab9d with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1009 20:18:18.302153  487957 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/embed-certs-565110/apiserver.crt.e7b9ab9d ...
	I1009 20:18:18.302230  487957 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/embed-certs-565110/apiserver.crt.e7b9ab9d: {Name:mkd9ba51107f17a2c6354d627d3a1138b49b247d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 20:18:18.302459  487957 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/embed-certs-565110/apiserver.key.e7b9ab9d ...
	I1009 20:18:18.302497  487957 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/embed-certs-565110/apiserver.key.e7b9ab9d: {Name:mk616b2f887fc7903bd26550f92199671bbd9e18 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 20:18:18.302638  487957 certs.go:382] copying /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/embed-certs-565110/apiserver.crt.e7b9ab9d -> /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/embed-certs-565110/apiserver.crt
	I1009 20:18:18.302762  487957 certs.go:386] copying /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/embed-certs-565110/apiserver.key.e7b9ab9d -> /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/embed-certs-565110/apiserver.key
	I1009 20:18:18.302852  487957 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/embed-certs-565110/proxy-client.key
	I1009 20:18:18.302901  487957 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/embed-certs-565110/proxy-client.crt with IP's: []
	I1009 20:18:18.432079  487957 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/embed-certs-565110/proxy-client.crt ...
	I1009 20:18:18.432167  487957 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/embed-certs-565110/proxy-client.crt: {Name:mkc1d0ca280e0c2bbae28c8147a5d7e32b0c826c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 20:18:18.432394  487957 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/embed-certs-565110/proxy-client.key ...
	I1009 20:18:18.432432  487957 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/embed-certs-565110/proxy-client.key: {Name:mk1d8c2aaad68b1b10dec12dbce954706b896254 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 20:18:18.432682  487957 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-294150/.minikube/certs/296002.pem (1338 bytes)
	W1009 20:18:18.432751  487957 certs.go:480] ignoring /home/jenkins/minikube-integration/21683-294150/.minikube/certs/296002_empty.pem, impossibly tiny 0 bytes
	I1009 20:18:18.432779  487957 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-294150/.minikube/certs/ca-key.pem (1679 bytes)
	I1009 20:18:18.432845  487957 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-294150/.minikube/certs/ca.pem (1078 bytes)
	I1009 20:18:18.432916  487957 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-294150/.minikube/certs/cert.pem (1123 bytes)
	I1009 20:18:18.432963  487957 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-294150/.minikube/certs/key.pem (1679 bytes)
	I1009 20:18:18.433051  487957 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-294150/.minikube/files/etc/ssl/certs/2960022.pem (1708 bytes)
	I1009 20:18:18.433726  487957 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-294150/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1009 20:18:18.453982  487957 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-294150/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1009 20:18:18.477046  487957 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-294150/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1009 20:18:18.498678  487957 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-294150/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1009 20:18:18.522773  487957 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/embed-certs-565110/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1009 20:18:18.551727  487957 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/embed-certs-565110/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1009 20:18:18.578927  487957 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/embed-certs-565110/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1009 20:18:18.597242  487957 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/embed-certs-565110/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1009 20:18:18.615529  487957 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-294150/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1009 20:18:18.633720  487957 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-294150/.minikube/certs/296002.pem --> /usr/share/ca-certificates/296002.pem (1338 bytes)
	I1009 20:18:18.651679  487957 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-294150/.minikube/files/etc/ssl/certs/2960022.pem --> /usr/share/ca-certificates/2960022.pem (1708 bytes)
	I1009 20:18:18.676459  487957 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1009 20:18:18.698724  487957 ssh_runner.go:195] Run: openssl version
	I1009 20:18:18.709691  487957 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2960022.pem && ln -fs /usr/share/ca-certificates/2960022.pem /etc/ssl/certs/2960022.pem"
	I1009 20:18:18.730884  487957 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2960022.pem
	I1009 20:18:18.738917  487957 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  9 19:08 /usr/share/ca-certificates/2960022.pem
	I1009 20:18:18.739000  487957 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2960022.pem
	I1009 20:18:18.815581  487957 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2960022.pem /etc/ssl/certs/3ec20f2e.0"
	I1009 20:18:18.825395  487957 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1009 20:18:18.834887  487957 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1009 20:18:18.839799  487957 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  9 19:01 /usr/share/ca-certificates/minikubeCA.pem
	I1009 20:18:18.839923  487957 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1009 20:18:18.883334  487957 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1009 20:18:18.892460  487957 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/296002.pem && ln -fs /usr/share/ca-certificates/296002.pem /etc/ssl/certs/296002.pem"
	I1009 20:18:18.901344  487957 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/296002.pem
	I1009 20:18:18.906383  487957 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  9 19:08 /usr/share/ca-certificates/296002.pem
	I1009 20:18:18.906511  487957 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/296002.pem
	I1009 20:18:18.954801  487957 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/296002.pem /etc/ssl/certs/51391683.0"
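	For reference, the certificate-store step logged above can be reproduced by hand. A minimal sketch, using the same commands and paths that appear in the log (the hash value itself depends on the certificate):
	
	# The symlink name minikube creates (e.g. 3ec20f2e.0, b5213941.0, 51391683.0)
	# is the OpenSSL subject hash of the PEM file plus a ".0" suffix.
	HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/2960022.pem)
	sudo ln -fs /etc/ssl/certs/2960022.pem "/etc/ssl/certs/${HASH}.0"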
	I1009 20:18:18.965100  487957 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1009 20:18:18.969852  487957 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1009 20:18:18.969967  487957 kubeadm.go:400] StartCluster: {Name:embed-certs-565110 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-565110 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1009 20:18:18.970127  487957 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1009 20:18:18.970227  487957 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1009 20:18:19.005021  487957 cri.go:89] found id: ""
	I1009 20:18:19.005211  487957 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1009 20:18:19.017617  487957 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1009 20:18:19.026204  487957 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1009 20:18:19.026321  487957 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1009 20:18:19.037313  487957 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1009 20:18:19.037383  487957 kubeadm.go:157] found existing configuration files:
	
	I1009 20:18:19.037471  487957 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1009 20:18:19.046246  487957 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1009 20:18:19.046372  487957 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1009 20:18:19.054169  487957 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1009 20:18:19.063915  487957 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1009 20:18:19.064026  487957 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1009 20:18:19.071767  487957 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1009 20:18:19.080891  487957 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1009 20:18:19.081005  487957 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1009 20:18:19.088826  487957 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1009 20:18:19.097863  487957 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1009 20:18:19.097977  487957 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1009 20:18:19.106378  487957 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1009 20:18:19.161756  487957 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1009 20:18:19.162271  487957 kubeadm.go:318] [preflight] Running pre-flight checks
	I1009 20:18:19.190365  487957 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1009 20:18:19.190524  487957 kubeadm.go:318] KERNEL_VERSION: 5.15.0-1084-aws
	I1009 20:18:19.190597  487957 kubeadm.go:318] OS: Linux
	I1009 20:18:19.190683  487957 kubeadm.go:318] CGROUPS_CPU: enabled
	I1009 20:18:19.190768  487957 kubeadm.go:318] CGROUPS_CPUACCT: enabled
	I1009 20:18:19.190850  487957 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1009 20:18:19.190937  487957 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1009 20:18:19.191019  487957 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1009 20:18:19.191106  487957 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1009 20:18:19.191185  487957 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1009 20:18:19.191273  487957 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1009 20:18:19.191352  487957 kubeadm.go:318] CGROUPS_BLKIO: enabled
	I1009 20:18:19.301684  487957 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1009 20:18:19.301953  487957 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1009 20:18:19.302161  487957 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1009 20:18:19.321543  487957 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1009 20:18:19.330626  487957 out.go:252]   - Generating certificates and keys ...
	I1009 20:18:19.330794  487957 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1009 20:18:19.330917  487957 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1009 20:18:19.682795  487957 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1009 20:18:19.824373  487957 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1009 20:18:20.062756  487957 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1009 20:18:20.176731  487957 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1009 20:18:20.524693  487957 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1009 20:18:20.526513  487957 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [embed-certs-565110 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	W1009 20:18:19.163254  485563 pod_ready.go:104] pod "coredns-66bc5c9577-h7jz6" is not "Ready", error: <nil>
	W1009 20:18:21.165465  485563 pod_ready.go:104] pod "coredns-66bc5c9577-h7jz6" is not "Ready", error: <nil>
	I1009 20:18:20.906239  487957 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1009 20:18:20.906932  487957 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [embed-certs-565110 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1009 20:18:21.173505  487957 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1009 20:18:21.833685  487957 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1009 20:18:22.864146  487957 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1009 20:18:22.864381  487957 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1009 20:18:22.996838  487957 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1009 20:18:23.380688  487957 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1009 20:18:24.382333  487957 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1009 20:18:24.718717  487957 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1009 20:18:24.976592  487957 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1009 20:18:24.977624  487957 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1009 20:18:24.980606  487957 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1009 20:18:24.985473  487957 out.go:252]   - Booting up control plane ...
	I1009 20:18:24.985583  487957 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1009 20:18:24.985664  487957 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1009 20:18:24.991659  487957 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1009 20:18:25.015936  487957 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1009 20:18:25.016067  487957 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1009 20:18:25.029627  487957 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1009 20:18:25.029734  487957 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1009 20:18:25.029776  487957 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1009 20:18:25.206386  487957 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1009 20:18:25.206513  487957 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	W1009 20:18:23.167466  485563 pod_ready.go:104] pod "coredns-66bc5c9577-h7jz6" is not "Ready", error: <nil>
	W1009 20:18:25.667437  485563 pod_ready.go:104] pod "coredns-66bc5c9577-h7jz6" is not "Ready", error: <nil>
	I1009 20:18:26.208203  487957 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.001861031s
	I1009 20:18:26.212100  487957 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1009 20:18:26.212201  487957 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.76.2:8443/livez
	I1009 20:18:26.212295  487957 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1009 20:18:26.212377  487957 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	W1009 20:18:27.667559  485563 pod_ready.go:104] pod "coredns-66bc5c9577-h7jz6" is not "Ready", error: <nil>
	W1009 20:18:30.163546  485563 pod_ready.go:104] pod "coredns-66bc5c9577-h7jz6" is not "Ready", error: <nil>
	I1009 20:18:31.271882  487957 kubeadm.go:318] [control-plane-check] kube-controller-manager is healthy after 5.058663339s
	I1009 20:18:32.929952  487957 kubeadm.go:318] [control-plane-check] kube-scheduler is healthy after 6.717849361s
	I1009 20:18:34.713557  487957 kubeadm.go:318] [control-plane-check] kube-apiserver is healthy after 8.501415519s
	I1009 20:18:34.733451  487957 kubeadm.go:318] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1009 20:18:34.750579  487957 kubeadm.go:318] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1009 20:18:34.763899  487957 kubeadm.go:318] [upload-certs] Skipping phase. Please see --upload-certs
	I1009 20:18:34.764118  487957 kubeadm.go:318] [mark-control-plane] Marking the node embed-certs-565110 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1009 20:18:34.775449  487957 kubeadm.go:318] [bootstrap-token] Using token: 2scf2u.5od2xm2wg3arr93y
	I1009 20:18:34.776787  487957 out.go:252]   - Configuring RBAC rules ...
	I1009 20:18:34.776935  487957 kubeadm.go:318] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1009 20:18:34.782520  487957 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1009 20:18:34.790071  487957 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1009 20:18:34.795055  487957 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1009 20:18:34.802262  487957 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1009 20:18:34.808941  487957 kubeadm.go:318] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1009 20:18:35.121621  487957 kubeadm.go:318] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1009 20:18:35.557009  487957 kubeadm.go:318] [addons] Applied essential addon: CoreDNS
	I1009 20:18:36.121457  487957 kubeadm.go:318] [addons] Applied essential addon: kube-proxy
	I1009 20:18:36.122695  487957 kubeadm.go:318] 
	I1009 20:18:36.122777  487957 kubeadm.go:318] Your Kubernetes control-plane has initialized successfully!
	I1009 20:18:36.122788  487957 kubeadm.go:318] 
	I1009 20:18:36.122870  487957 kubeadm.go:318] To start using your cluster, you need to run the following as a regular user:
	I1009 20:18:36.122879  487957 kubeadm.go:318] 
	I1009 20:18:36.122906  487957 kubeadm.go:318]   mkdir -p $HOME/.kube
	I1009 20:18:36.122972  487957 kubeadm.go:318]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1009 20:18:36.123030  487957 kubeadm.go:318]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1009 20:18:36.123040  487957 kubeadm.go:318] 
	I1009 20:18:36.123097  487957 kubeadm.go:318] Alternatively, if you are the root user, you can run:
	I1009 20:18:36.123105  487957 kubeadm.go:318] 
	I1009 20:18:36.123191  487957 kubeadm.go:318]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1009 20:18:36.123209  487957 kubeadm.go:318] 
	I1009 20:18:36.123285  487957 kubeadm.go:318] You should now deploy a pod network to the cluster.
	I1009 20:18:36.123376  487957 kubeadm.go:318] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1009 20:18:36.123451  487957 kubeadm.go:318]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1009 20:18:36.123457  487957 kubeadm.go:318] 
	I1009 20:18:36.123552  487957 kubeadm.go:318] You can now join any number of control-plane nodes by copying certificate authorities
	I1009 20:18:36.123633  487957 kubeadm.go:318] and service account keys on each node and then running the following as root:
	I1009 20:18:36.123638  487957 kubeadm.go:318] 
	I1009 20:18:36.123730  487957 kubeadm.go:318]   kubeadm join control-plane.minikube.internal:8443 --token 2scf2u.5od2xm2wg3arr93y \
	I1009 20:18:36.123839  487957 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:e766d16640f098061f552dd476e80ebd3809bd57b4957045222f32c55d34903e \
	I1009 20:18:36.123861  487957 kubeadm.go:318] 	--control-plane 
	I1009 20:18:36.123865  487957 kubeadm.go:318] 
	I1009 20:18:36.123954  487957 kubeadm.go:318] Then you can join any number of worker nodes by running the following on each as root:
	I1009 20:18:36.123959  487957 kubeadm.go:318] 
	I1009 20:18:36.124045  487957 kubeadm.go:318] kubeadm join control-plane.minikube.internal:8443 --token 2scf2u.5od2xm2wg3arr93y \
	I1009 20:18:36.124152  487957 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:e766d16640f098061f552dd476e80ebd3809bd57b4957045222f32c55d34903e 
	I1009 20:18:36.128166  487957 kubeadm.go:318] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1009 20:18:36.128433  487957 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1009 20:18:36.128556  487957 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
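	The join commands printed above pin the cluster CA via --discovery-token-ca-cert-hash. A minimal sketch of how that SHA-256 value can be recomputed, assuming an RSA CA key and the /var/lib/minikube/certs directory named in the [certs] phase above:
	
	# Hash of the CA public key, as used by "kubeadm join" for discovery pinning.
	openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
	  | openssl rsa -pubin -outform der 2>/dev/null \
	  | openssl dgst -sha256 -hex | sed 's/^.* //'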
	I1009 20:18:36.128619  487957 cni.go:84] Creating CNI manager for ""
	I1009 20:18:36.128635  487957 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1009 20:18:36.130939  487957 out.go:179] * Configuring CNI (Container Networking Interface) ...
	W1009 20:18:32.165289  485563 pod_ready.go:104] pod "coredns-66bc5c9577-h7jz6" is not "Ready", error: <nil>
	W1009 20:18:34.665519  485563 pod_ready.go:104] pod "coredns-66bc5c9577-h7jz6" is not "Ready", error: <nil>
	I1009 20:18:36.132180  487957 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1009 20:18:36.138224  487957 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1009 20:18:36.138249  487957 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1009 20:18:36.160571  487957 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1009 20:18:36.604852  487957 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1009 20:18:36.605000  487957 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1009 20:18:36.605072  487957 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-565110 minikube.k8s.io/updated_at=2025_10_09T20_18_36_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=67931d2cc0a2e29153be17ee4f2d502b8a45c9cb minikube.k8s.io/name=embed-certs-565110 minikube.k8s.io/primary=true
	I1009 20:18:36.829003  487957 ops.go:34] apiserver oom_adj: -16
	I1009 20:18:36.829202  487957 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1009 20:18:37.330088  487957 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1009 20:18:37.829429  487957 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1009 20:18:38.330001  487957 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1009 20:18:38.829263  487957 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1009 20:18:39.329670  487957 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1009 20:18:39.830065  487957 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1009 20:18:40.330273  487957 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1009 20:18:40.830174  487957 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1009 20:18:40.965309  487957 kubeadm.go:1113] duration metric: took 4.360356736s to wait for elevateKubeSystemPrivileges
	I1009 20:18:40.965343  487957 kubeadm.go:402] duration metric: took 21.99538278s to StartCluster
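	The repeated "kubectl get sa default" calls above are a readiness poll: the step is not considered done until the "default" ServiceAccount exists in the new cluster. A hypothetical shell equivalent of that loop, using the same binary and kubeconfig paths from the log:
	
	# Poll (roughly every 500ms in the log) until the default ServiceAccount appears.
	until sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default \
	      --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
	  sleep 0.5
	done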
	I1009 20:18:40.965362  487957 settings.go:142] acquiring lock: {Name:mk20228ebaa2294ae35726600a0d8058088b24a4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 20:18:40.965441  487957 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21683-294150/kubeconfig
	I1009 20:18:40.966758  487957 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-294150/kubeconfig: {Name:mke4661344aa77e10bf6690825e8d3a29b29b411 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 20:18:40.966988  487957 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1009 20:18:40.967078  487957 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1009 20:18:40.967495  487957 config.go:182] Loaded profile config "embed-certs-565110": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1009 20:18:40.967553  487957 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1009 20:18:40.967622  487957 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-565110"
	I1009 20:18:40.967637  487957 addons.go:238] Setting addon storage-provisioner=true in "embed-certs-565110"
	I1009 20:18:40.967665  487957 host.go:66] Checking if "embed-certs-565110" exists ...
	I1009 20:18:40.968251  487957 cli_runner.go:164] Run: docker container inspect embed-certs-565110 --format={{.State.Status}}
	I1009 20:18:40.968715  487957 addons.go:69] Setting default-storageclass=true in profile "embed-certs-565110"
	I1009 20:18:40.968736  487957 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-565110"
	I1009 20:18:40.969034  487957 cli_runner.go:164] Run: docker container inspect embed-certs-565110 --format={{.State.Status}}
	I1009 20:18:40.972447  487957 out.go:179] * Verifying Kubernetes components...
	I1009 20:18:40.974031  487957 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 20:18:41.031874  487957 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	W1009 20:18:37.163362  485563 pod_ready.go:104] pod "coredns-66bc5c9577-h7jz6" is not "Ready", error: <nil>
	W1009 20:18:39.164606  485563 pod_ready.go:104] pod "coredns-66bc5c9577-h7jz6" is not "Ready", error: <nil>
	W1009 20:18:41.663578  485563 pod_ready.go:104] pod "coredns-66bc5c9577-h7jz6" is not "Ready", error: <nil>
	I1009 20:18:41.033072  487957 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1009 20:18:41.033092  487957 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1009 20:18:41.033176  487957 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-565110
	I1009 20:18:41.040795  487957 addons.go:238] Setting addon default-storageclass=true in "embed-certs-565110"
	I1009 20:18:41.042855  487957 host.go:66] Checking if "embed-certs-565110" exists ...
	I1009 20:18:41.043324  487957 cli_runner.go:164] Run: docker container inspect embed-certs-565110 --format={{.State.Status}}
	I1009 20:18:41.076337  487957 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33436 SSHKeyPath:/home/jenkins/minikube-integration/21683-294150/.minikube/machines/embed-certs-565110/id_rsa Username:docker}
	I1009 20:18:41.101840  487957 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1009 20:18:41.101860  487957 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1009 20:18:41.101921  487957 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-565110
	I1009 20:18:41.136315  487957 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33436 SSHKeyPath:/home/jenkins/minikube-integration/21683-294150/.minikube/machines/embed-certs-565110/id_rsa Username:docker}
	I1009 20:18:41.464439  487957 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1009 20:18:41.464668  487957 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1009 20:18:41.481534  487957 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1009 20:18:41.509167  487957 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1009 20:18:42.036935  487957 start.go:977] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS's ConfigMap
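	The sed pipeline a few lines above rewrites the CoreDNS Corefile so that host.minikube.internal resolves to the host gateway IP. A hedged way to verify the injected stanza (exact indentation may differ):
	
	kubectl -n kube-system get configmap coredns -o yaml | grep -A3 'hosts {'
	# Expected excerpt, per the replace command above:
	#        hosts {
	#           192.168.76.1 host.minikube.internal
	#           fallthrough
	#        }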
	I1009 20:18:42.039336  487957 node_ready.go:35] waiting up to 6m0s for node "embed-certs-565110" to be "Ready" ...
	I1009 20:18:42.357504  487957 out.go:179] * Enabled addons: default-storageclass, storage-provisioner
	I1009 20:18:43.670316  485563 pod_ready.go:94] pod "coredns-66bc5c9577-h7jz6" is "Ready"
	I1009 20:18:43.670341  485563 pod_ready.go:86] duration metric: took 33.012270711s for pod "coredns-66bc5c9577-h7jz6" in "kube-system" namespace to be "Ready" or be gone ...
	I1009 20:18:43.673038  485563 pod_ready.go:83] waiting for pod "etcd-no-preload-020313" in "kube-system" namespace to be "Ready" or be gone ...
	I1009 20:18:43.677784  485563 pod_ready.go:94] pod "etcd-no-preload-020313" is "Ready"
	I1009 20:18:43.677811  485563 pod_ready.go:86] duration metric: took 4.744738ms for pod "etcd-no-preload-020313" in "kube-system" namespace to be "Ready" or be gone ...
	I1009 20:18:43.680394  485563 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-020313" in "kube-system" namespace to be "Ready" or be gone ...
	I1009 20:18:43.685734  485563 pod_ready.go:94] pod "kube-apiserver-no-preload-020313" is "Ready"
	I1009 20:18:43.685765  485563 pod_ready.go:86] duration metric: took 5.342851ms for pod "kube-apiserver-no-preload-020313" in "kube-system" namespace to be "Ready" or be gone ...
	I1009 20:18:43.688292  485563 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-020313" in "kube-system" namespace to be "Ready" or be gone ...
	I1009 20:18:43.862366  485563 pod_ready.go:94] pod "kube-controller-manager-no-preload-020313" is "Ready"
	I1009 20:18:43.862391  485563 pod_ready.go:86] duration metric: took 174.071318ms for pod "kube-controller-manager-no-preload-020313" in "kube-system" namespace to be "Ready" or be gone ...
	I1009 20:18:44.062443  485563 pod_ready.go:83] waiting for pod "kube-proxy-cd5v6" in "kube-system" namespace to be "Ready" or be gone ...
	I1009 20:18:44.462266  485563 pod_ready.go:94] pod "kube-proxy-cd5v6" is "Ready"
	I1009 20:18:44.462301  485563 pod_ready.go:86] duration metric: took 399.823993ms for pod "kube-proxy-cd5v6" in "kube-system" namespace to be "Ready" or be gone ...
	I1009 20:18:44.666593  485563 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-020313" in "kube-system" namespace to be "Ready" or be gone ...
	I1009 20:18:45.066392  485563 pod_ready.go:94] pod "kube-scheduler-no-preload-020313" is "Ready"
	I1009 20:18:45.066425  485563 pod_ready.go:86] duration metric: took 399.801486ms for pod "kube-scheduler-no-preload-020313" in "kube-system" namespace to be "Ready" or be gone ...
	I1009 20:18:45.066441  485563 pod_ready.go:40] duration metric: took 34.412045642s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1009 20:18:45.206037  485563 start.go:628] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1009 20:18:45.207816  485563 out.go:179] * Done! kubectl is now configured to use "no-preload-020313" cluster and "default" namespace by default
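	At this point the kubeconfig has been updated with the new context (minikube names it after the profile). A hypothetical quick check that the context is active and the control-plane pods are up:
	
	kubectl config use-context no-preload-020313
	kubectl get pods -n kube-system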
	I1009 20:18:42.359111  487957 addons.go:514] duration metric: took 1.391539274s for enable addons: enabled=[default-storageclass storage-provisioner]
	I1009 20:18:42.541565  487957 kapi.go:214] "coredns" deployment in "kube-system" namespace and "embed-certs-565110" context rescaled to 1 replicas
	W1009 20:18:44.042503  487957 node_ready.go:57] node "embed-certs-565110" has "Ready":"False" status (will retry)
	W1009 20:18:46.542625  487957 node_ready.go:57] node "embed-certs-565110" has "Ready":"False" status (will retry)
	W1009 20:18:49.042996  487957 node_ready.go:57] node "embed-certs-565110" has "Ready":"False" status (will retry)
	W1009 20:18:51.542630  487957 node_ready.go:57] node "embed-certs-565110" has "Ready":"False" status (will retry)
	W1009 20:18:54.042948  487957 node_ready.go:57] node "embed-certs-565110" has "Ready":"False" status (will retry)
	
	
	==> CRI-O <==
	Oct 09 20:18:39 no-preload-020313 crio[653]: time="2025-10-09T20:18:39.265965617Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=278dd3de-934b-4f0c-a59b-00eae0cb7467 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 20:18:39 no-preload-020313 crio[653]: time="2025-10-09T20:18:39.26916672Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 09 20:18:39 no-preload-020313 crio[653]: time="2025-10-09T20:18:39.273596533Z" level=info msg="Removed container 0afd5a0006249b39174c994b4216b2b676030afc2da532d4d2ccbb7240b6bcf7: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-dtq85/dashboard-metrics-scraper" id=83e2b269-5f19-48c3-ba7e-928832c2d801 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 09 20:18:39 no-preload-020313 crio[653]: time="2025-10-09T20:18:39.278774918Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 09 20:18:39 no-preload-020313 crio[653]: time="2025-10-09T20:18:39.278957017Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/9f395e64ed66d6e2d11c1915a5efb55d9226364d810c88098bde381ca110af5a/merged/etc/passwd: no such file or directory"
	Oct 09 20:18:39 no-preload-020313 crio[653]: time="2025-10-09T20:18:39.278977858Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/9f395e64ed66d6e2d11c1915a5efb55d9226364d810c88098bde381ca110af5a/merged/etc/group: no such file or directory"
	Oct 09 20:18:39 no-preload-020313 crio[653]: time="2025-10-09T20:18:39.279224696Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 09 20:18:39 no-preload-020313 crio[653]: time="2025-10-09T20:18:39.294515837Z" level=info msg="Created container 3d32dbce2cc613f987b4189a71d62e455a6390b309ef522069aa954c7269e07b: kube-system/storage-provisioner/storage-provisioner" id=278dd3de-934b-4f0c-a59b-00eae0cb7467 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 20:18:39 no-preload-020313 crio[653]: time="2025-10-09T20:18:39.295378642Z" level=info msg="Starting container: 3d32dbce2cc613f987b4189a71d62e455a6390b309ef522069aa954c7269e07b" id=424fb86c-9d47-41c3-89de-e156049716dd name=/runtime.v1.RuntimeService/StartContainer
	Oct 09 20:18:39 no-preload-020313 crio[653]: time="2025-10-09T20:18:39.297867161Z" level=info msg="Started container" PID=1626 containerID=3d32dbce2cc613f987b4189a71d62e455a6390b309ef522069aa954c7269e07b description=kube-system/storage-provisioner/storage-provisioner id=424fb86c-9d47-41c3-89de-e156049716dd name=/runtime.v1.RuntimeService/StartContainer sandboxID=5a73b1409a90f548928d13b2b2697b3cc601605b508a4af6d1ac3ad1055bea9c
	Oct 09 20:18:48 no-preload-020313 crio[653]: time="2025-10-09T20:18:48.917633485Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 09 20:18:48 no-preload-020313 crio[653]: time="2025-10-09T20:18:48.925090145Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 09 20:18:48 no-preload-020313 crio[653]: time="2025-10-09T20:18:48.925307264Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 09 20:18:48 no-preload-020313 crio[653]: time="2025-10-09T20:18:48.925390489Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 09 20:18:48 no-preload-020313 crio[653]: time="2025-10-09T20:18:48.928997037Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 09 20:18:48 no-preload-020313 crio[653]: time="2025-10-09T20:18:48.929198632Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 09 20:18:48 no-preload-020313 crio[653]: time="2025-10-09T20:18:48.929236114Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 09 20:18:48 no-preload-020313 crio[653]: time="2025-10-09T20:18:48.932465376Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 09 20:18:48 no-preload-020313 crio[653]: time="2025-10-09T20:18:48.932499806Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 09 20:18:48 no-preload-020313 crio[653]: time="2025-10-09T20:18:48.932530395Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 09 20:18:48 no-preload-020313 crio[653]: time="2025-10-09T20:18:48.935736789Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 09 20:18:48 no-preload-020313 crio[653]: time="2025-10-09T20:18:48.935772285Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 09 20:18:48 no-preload-020313 crio[653]: time="2025-10-09T20:18:48.93579695Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 09 20:18:48 no-preload-020313 crio[653]: time="2025-10-09T20:18:48.939332728Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 09 20:18:48 no-preload-020313 crio[653]: time="2025-10-09T20:18:48.93936989Z" level=info msg="Updated default CNI network name to kindnet"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED              STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	3d32dbce2cc61       66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51                                           21 seconds ago       Running             storage-provisioner         2                   5a73b1409a90f       storage-provisioner                          kube-system
	874aa1307bd23       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           21 seconds ago       Exited              dashboard-metrics-scraper   2                   6cf35c67c122a       dashboard-metrics-scraper-6ffb444bf9-dtq85   kubernetes-dashboard
	fe7a54433b350       docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf   32 seconds ago       Running             kubernetes-dashboard        0                   ac87e7f1e7b02       kubernetes-dashboard-855c9754f9-46jtk        kubernetes-dashboard
	d6b7ee85aeefa       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                           52 seconds ago       Running             coredns                     1                   6d07cf22449bc       coredns-66bc5c9577-h7jz6                     kube-system
	30db86e88976f       1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c                                           52 seconds ago       Running             busybox                     1                   b7748c0a4538a       busybox                                      default
	cfac8e5ac3da2       66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51                                           52 seconds ago       Exited              storage-provisioner         1                   5a73b1409a90f       storage-provisioner                          kube-system
	0442d50e4e396       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                           52 seconds ago       Running             kube-proxy                  1                   59c2a8e5c6b1d       kube-proxy-cd5v6                             kube-system
	042d3009a6505       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                           52 seconds ago       Running             kindnet-cni                 1                   d080dbf8da035       kindnet-47kwl                                kube-system
	22b87e577d7a8       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                           About a minute ago   Running             kube-controller-manager     1                   5be5222eb7463       kube-controller-manager-no-preload-020313    kube-system
	5abd9717aed8a       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                           About a minute ago   Running             etcd                        1                   561cfcce190ce       etcd-no-preload-020313                       kube-system
	bdcbfecca01ea       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                           About a minute ago   Running             kube-scheduler              1                   6edc536f3c664       kube-scheduler-no-preload-020313             kube-system
	d49e0cc690dca       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                           About a minute ago   Running             kube-apiserver              1                   9cdbf2be3627c       kube-apiserver-no-preload-020313             kube-system
	
	
	==> coredns [d6b7ee85aeefababe2c083f6e0a8cd0dc31cd7c5844cb95bf3b217fc2272910f] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:46041 - 48469 "HINFO IN 630548794168358172.7858796566592122350. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.016536469s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	
	
	==> describe nodes <==
	Name:               no-preload-020313
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=no-preload-020313
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=67931d2cc0a2e29153be17ee4f2d502b8a45c9cb
	                    minikube.k8s.io/name=no-preload-020313
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_09T20_17_03_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 09 Oct 2025 20:16:59 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-020313
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 09 Oct 2025 20:18:48 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 09 Oct 2025 20:18:48 +0000   Thu, 09 Oct 2025 20:16:51 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 09 Oct 2025 20:18:48 +0000   Thu, 09 Oct 2025 20:16:51 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 09 Oct 2025 20:18:48 +0000   Thu, 09 Oct 2025 20:16:51 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 09 Oct 2025 20:18:48 +0000   Thu, 09 Oct 2025 20:17:23 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    no-preload-020313
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022292Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022292Ki
	  pods:               110
	System Info:
	  Machine ID:                 4a34688d9b034718ad38693aacdec85a
	  System UUID:                a3d84e5d-68ba-4d89-bdca-3ce490a9cb49
	  Boot ID:                    7eb9c7d9-8be8-415a-b196-052b632584a4
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         94s
	  kube-system                 coredns-66bc5c9577-h7jz6                      100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     112s
	  kube-system                 etcd-no-preload-020313                        100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         118s
	  kube-system                 kindnet-47kwl                                 100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      112s
	  kube-system                 kube-apiserver-no-preload-020313              250m (12%)    0 (0%)      0 (0%)           0 (0%)         118s
	  kube-system                 kube-controller-manager-no-preload-020313     200m (10%)    0 (0%)      0 (0%)           0 (0%)         118s
	  kube-system                 kube-proxy-cd5v6                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         112s
	  kube-system                 kube-scheduler-no-preload-020313              100m (5%)     0 (0%)      0 (0%)           0 (0%)         118s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         111s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-dtq85    0 (0%)        0 (0%)      0 (0%)           0 (0%)         48s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-46jtk         0 (0%)        0 (0%)      0 (0%)           0 (0%)         48s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 111s                   kube-proxy       
	  Normal   Starting                 50s                    kube-proxy       
	  Normal   Starting                 2m11s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 2m11s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  2m11s (x8 over 2m11s)  kubelet          Node no-preload-020313 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m11s (x8 over 2m11s)  kubelet          Node no-preload-020313 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m11s (x8 over 2m11s)  kubelet          Node no-preload-020313 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    118s                   kubelet          Node no-preload-020313 status is now: NodeHasNoDiskPressure
	  Warning  CgroupV1                 118s                   kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  118s                   kubelet          Node no-preload-020313 status is now: NodeHasSufficientMemory
	  Normal   NodeHasSufficientPID     118s                   kubelet          Node no-preload-020313 status is now: NodeHasSufficientPID
	  Normal   Starting                 118s                   kubelet          Starting kubelet.
	  Normal   RegisteredNode           113s                   node-controller  Node no-preload-020313 event: Registered Node no-preload-020313 in Controller
	  Normal   NodeReady                97s                    kubelet          Node no-preload-020313 status is now: NodeReady
	  Normal   Starting                 61s                    kubelet          Starting kubelet.
	  Warning  CgroupV1                 61s                    kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  60s (x8 over 61s)      kubelet          Node no-preload-020313 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    60s (x8 over 61s)      kubelet          Node no-preload-020313 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     60s (x8 over 61s)      kubelet          Node no-preload-020313 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           49s                    node-controller  Node no-preload-020313 event: Registered Node no-preload-020313 in Controller
	
	
	==> dmesg <==
	[Oct 9 19:47] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:48] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:49] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:50] overlayfs: idmapped layers are currently not supported
	[ +27.967875] overlayfs: idmapped layers are currently not supported
	[  +2.167003] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:51] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:52] overlayfs: idmapped layers are currently not supported
	[ +41.056229] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:53] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:54] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:55] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:57] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:59] overlayfs: idmapped layers are currently not supported
	[ +30.257956] overlayfs: idmapped layers are currently not supported
	[Oct 9 20:02] overlayfs: idmapped layers are currently not supported
	[Oct 9 20:04] overlayfs: idmapped layers are currently not supported
	[Oct 9 20:06] overlayfs: idmapped layers are currently not supported
	[Oct 9 20:12] overlayfs: idmapped layers are currently not supported
	[Oct 9 20:14] overlayfs: idmapped layers are currently not supported
	[Oct 9 20:15] overlayfs: idmapped layers are currently not supported
	[Oct 9 20:16] overlayfs: idmapped layers are currently not supported
	[ +23.810739] overlayfs: idmapped layers are currently not supported
	[Oct 9 20:18] overlayfs: idmapped layers are currently not supported
	[ +26.082927] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [5abd9717aed8a5baaa24ce4dbac3f6a6652f3d3b84cb43dc09007beee7a84423] <==
	{"level":"warn","ts":"2025-10-09T20:18:04.895268Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51970","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T20:18:04.933903Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51976","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T20:18:05.011900Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52004","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T20:18:05.047277Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52012","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T20:18:05.072152Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52024","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T20:18:05.133766Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52042","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T20:18:05.180070Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52060","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T20:18:05.215590Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52072","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T20:18:05.255470Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52096","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T20:18:05.314593Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52116","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T20:18:05.365227Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52134","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T20:18:05.400526Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52142","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T20:18:05.497021Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52170","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T20:18:05.625171Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52182","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T20:18:05.708324Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52194","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T20:18:05.775451Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52200","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T20:18:05.841992Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52226","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T20:18:05.887395Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52244","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T20:18:05.953958Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52264","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T20:18:05.992186Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52282","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T20:18:06.060651Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52308","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T20:18:06.108439Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52322","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T20:18:06.130302Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52342","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T20:18:06.140455Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52358","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T20:18:06.267014Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52378","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 20:19:01 up  3:01,  0 user,  load average: 4.30, 2.55, 1.92
	Linux no-preload-020313 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [042d3009a6505b38db3a5645a55f6992d1b6ef9254086f64eef6f0621cff64c8] <==
	I1009 20:18:08.557660       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1009 20:18:08.557875       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1009 20:18:08.558028       1 main.go:148] setting mtu 1500 for CNI 
	I1009 20:18:08.558042       1 main.go:178] kindnetd IP family: "ipv4"
	I1009 20:18:08.558057       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-09T20:18:08Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1009 20:18:08.916104       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1009 20:18:08.920271       1 controller.go:381] "Waiting for informer caches to sync"
	I1009 20:18:08.920300       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1009 20:18:08.920868       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1009 20:18:38.930697       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1009 20:18:38.930697       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1009 20:18:38.930795       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1009 20:18:38.930871       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	I1009 20:18:40.620514       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1009 20:18:40.620546       1 metrics.go:72] Registering metrics
	I1009 20:18:40.620631       1 controller.go:711] "Syncing nftables rules"
	I1009 20:18:48.916600       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1009 20:18:48.916736       1 main.go:301] handling current node
	I1009 20:18:58.924621       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1009 20:18:58.924661       1 main.go:301] handling current node
	
	
	==> kube-apiserver [d49e0cc690dcab668ca06327548e322b4d012301c7ad96444959726efbca4e09] <==
	I1009 20:18:07.524509       1 policy_source.go:240] refreshing policies
	I1009 20:18:07.546369       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1009 20:18:07.561189       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1009 20:18:07.565049       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1009 20:18:07.573358       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1009 20:18:07.573414       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1009 20:18:07.601252       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1009 20:18:07.601332       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1009 20:18:07.607109       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1009 20:18:07.607316       1 aggregator.go:171] initial CRD sync complete...
	I1009 20:18:07.607328       1 autoregister_controller.go:144] Starting autoregister controller
	I1009 20:18:07.607335       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1009 20:18:07.607340       1 cache.go:39] Caches are synced for autoregister controller
	I1009 20:18:07.613048       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1009 20:18:07.879882       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1009 20:18:08.245542       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1009 20:18:08.908702       1 controller.go:667] quota admission added evaluator for: namespaces
	I1009 20:18:09.432194       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1009 20:18:09.554545       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1009 20:18:09.631975       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1009 20:18:09.922350       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.109.69.145"}
	I1009 20:18:09.982279       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.100.89.231"}
	I1009 20:18:12.183157       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1009 20:18:12.285150       1 controller.go:667] quota admission added evaluator for: endpoints
	I1009 20:18:12.404932       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [22b87e577d7a8f108e7d77d095e44d5b3392e21fb7da8260fe838b3e930b2229] <==
	I1009 20:18:11.801844       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1009 20:18:11.805150       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1009 20:18:11.809027       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1009 20:18:11.809053       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1009 20:18:11.809062       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1009 20:18:11.814149       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1009 20:18:11.815405       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1009 20:18:11.818678       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1009 20:18:11.820961       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1009 20:18:11.823951       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1009 20:18:11.824120       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1009 20:18:11.827370       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1009 20:18:11.827492       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1009 20:18:11.831200       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1009 20:18:11.834679       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1009 20:18:11.835936       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1009 20:18:11.839129       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1009 20:18:11.839247       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1009 20:18:11.839367       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="no-preload-020313"
	I1009 20:18:11.839427       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1009 20:18:11.840645       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1009 20:18:11.846071       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1009 20:18:11.851607       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1009 20:18:12.431989       1 endpointslice_controller.go:344] "Error syncing endpoint slices for service, retrying" logger="endpointslice-controller" key="kubernetes-dashboard/dashboard-metrics-scraper" err="EndpointSlice informer cache is out of date"
	I1009 20:18:12.433733       1 endpointslice_controller.go:344] "Error syncing endpoint slices for service, retrying" logger="endpointslice-controller" key="kubernetes-dashboard/kubernetes-dashboard" err="EndpointSlice informer cache is out of date"
	
	
	==> kube-proxy [0442d50e4e3961eb21b5a12dda29ff9aea11f015d76a75f5fc6d85fbecaab975] <==
	I1009 20:18:09.833233       1 server_linux.go:53] "Using iptables proxy"
	I1009 20:18:10.209573       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1009 20:18:10.310689       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1009 20:18:10.310720       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1009 20:18:10.310787       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1009 20:18:10.348382       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1009 20:18:10.348712       1 server_linux.go:132] "Using iptables Proxier"
	I1009 20:18:10.353726       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1009 20:18:10.354143       1 server.go:527] "Version info" version="v1.34.1"
	I1009 20:18:10.354337       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1009 20:18:10.355998       1 config.go:200] "Starting service config controller"
	I1009 20:18:10.356063       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1009 20:18:10.356116       1 config.go:106] "Starting endpoint slice config controller"
	I1009 20:18:10.356145       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1009 20:18:10.356181       1 config.go:403] "Starting serviceCIDR config controller"
	I1009 20:18:10.356207       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1009 20:18:10.356948       1 config.go:309] "Starting node config controller"
	I1009 20:18:10.357011       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1009 20:18:10.357041       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1009 20:18:10.457544       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1009 20:18:10.459152       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1009 20:18:10.459192       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [bdcbfecca01ea6e3e0ee392800df2ec67f04ed687955da27cce3925008d3bc5a] <==
	I1009 20:18:05.045408       1 serving.go:386] Generated self-signed cert in-memory
	I1009 20:18:07.898243       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1009 20:18:07.902923       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1009 20:18:07.979508       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1009 20:18:07.979793       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1009 20:18:07.979861       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1009 20:18:07.979910       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1009 20:18:07.987621       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1009 20:18:07.987722       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1009 20:18:07.987771       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1009 20:18:07.990173       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1009 20:18:08.080523       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1009 20:18:08.090088       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1009 20:18:08.090416       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	
	
	==> kubelet <==
	Oct 09 20:18:13 no-preload-020313 kubelet[774]: E1009 20:18:13.550055     774 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/ffc02df7-a011-4dff-a92d-b4705e05953c-kube-api-access-p2jkz podName:ffc02df7-a011-4dff-a92d-b4705e05953c nodeName:}" failed. No retries permitted until 2025-10-09 20:18:14.050028025 +0000 UTC m=+14.504640970 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-p2jkz" (UniqueName: "kubernetes.io/projected/ffc02df7-a011-4dff-a92d-b4705e05953c-kube-api-access-p2jkz") pod "kubernetes-dashboard-855c9754f9-46jtk" (UID: "ffc02df7-a011-4dff-a92d-b4705e05953c") : failed to sync configmap cache: timed out waiting for the condition
	Oct 09 20:18:13 no-preload-020313 kubelet[774]: E1009 20:18:13.554514     774 projected.go:291] Couldn't get configMap kubernetes-dashboard/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition
	Oct 09 20:18:13 no-preload-020313 kubelet[774]: E1009 20:18:13.554566     774 projected.go:196] Error preparing data for projected volume kube-api-access-4tdzg for pod kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-dtq85: failed to sync configmap cache: timed out waiting for the condition
	Oct 09 20:18:13 no-preload-020313 kubelet[774]: E1009 20:18:13.554641     774 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/31bd6e7a-7a5f-4056-accd-4f55dfce30df-kube-api-access-4tdzg podName:31bd6e7a-7a5f-4056-accd-4f55dfce30df nodeName:}" failed. No retries permitted until 2025-10-09 20:18:14.054620614 +0000 UTC m=+14.509233559 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-4tdzg" (UniqueName: "kubernetes.io/projected/31bd6e7a-7a5f-4056-accd-4f55dfce30df-kube-api-access-4tdzg") pod "dashboard-metrics-scraper-6ffb444bf9-dtq85" (UID: "31bd6e7a-7a5f-4056-accd-4f55dfce30df") : failed to sync configmap cache: timed out waiting for the condition
	Oct 09 20:18:14 no-preload-020313 kubelet[774]: W1009 20:18:14.255398     774 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/5f4dc51ee851ef6c368b3e8adfe4e5921c2b1bdc3199a9c54c6ccf58afab3861/crio-6cf35c67c122aa3e69dfd6153523853775a7a48888fd8de0fdf846d0caf2bbe6 WatchSource:0}: Error finding container 6cf35c67c122aa3e69dfd6153523853775a7a48888fd8de0fdf846d0caf2bbe6: Status 404 returned error can't find the container with id 6cf35c67c122aa3e69dfd6153523853775a7a48888fd8de0fdf846d0caf2bbe6
	Oct 09 20:18:14 no-preload-020313 kubelet[774]: W1009 20:18:14.271735     774 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/5f4dc51ee851ef6c368b3e8adfe4e5921c2b1bdc3199a9c54c6ccf58afab3861/crio-ac87e7f1e7b026c6a28c236052d46cc31752aa4610e062e9cf2c31fb01133e3d WatchSource:0}: Error finding container ac87e7f1e7b026c6a28c236052d46cc31752aa4610e062e9cf2c31fb01133e3d: Status 404 returned error can't find the container with id ac87e7f1e7b026c6a28c236052d46cc31752aa4610e062e9cf2c31fb01133e3d
	Oct 09 20:18:21 no-preload-020313 kubelet[774]: I1009 20:18:21.190435     774 scope.go:117] "RemoveContainer" containerID="7b824854b72b754ea7bde958fa635d4205a77467a0847c96276698eeb17623b4"
	Oct 09 20:18:22 no-preload-020313 kubelet[774]: I1009 20:18:22.199442     774 scope.go:117] "RemoveContainer" containerID="7b824854b72b754ea7bde958fa635d4205a77467a0847c96276698eeb17623b4"
	Oct 09 20:18:22 no-preload-020313 kubelet[774]: I1009 20:18:22.200047     774 scope.go:117] "RemoveContainer" containerID="0afd5a0006249b39174c994b4216b2b676030afc2da532d4d2ccbb7240b6bcf7"
	Oct 09 20:18:22 no-preload-020313 kubelet[774]: E1009 20:18:22.204830     774 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-dtq85_kubernetes-dashboard(31bd6e7a-7a5f-4056-accd-4f55dfce30df)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-dtq85" podUID="31bd6e7a-7a5f-4056-accd-4f55dfce30df"
	Oct 09 20:18:23 no-preload-020313 kubelet[774]: I1009 20:18:23.203819     774 scope.go:117] "RemoveContainer" containerID="0afd5a0006249b39174c994b4216b2b676030afc2da532d4d2ccbb7240b6bcf7"
	Oct 09 20:18:23 no-preload-020313 kubelet[774]: E1009 20:18:23.204130     774 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-dtq85_kubernetes-dashboard(31bd6e7a-7a5f-4056-accd-4f55dfce30df)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-dtq85" podUID="31bd6e7a-7a5f-4056-accd-4f55dfce30df"
	Oct 09 20:18:24 no-preload-020313 kubelet[774]: I1009 20:18:24.206297     774 scope.go:117] "RemoveContainer" containerID="0afd5a0006249b39174c994b4216b2b676030afc2da532d4d2ccbb7240b6bcf7"
	Oct 09 20:18:24 no-preload-020313 kubelet[774]: E1009 20:18:24.206465     774 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-dtq85_kubernetes-dashboard(31bd6e7a-7a5f-4056-accd-4f55dfce30df)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-dtq85" podUID="31bd6e7a-7a5f-4056-accd-4f55dfce30df"
	Oct 09 20:18:38 no-preload-020313 kubelet[774]: I1009 20:18:38.955605     774 scope.go:117] "RemoveContainer" containerID="0afd5a0006249b39174c994b4216b2b676030afc2da532d4d2ccbb7240b6bcf7"
	Oct 09 20:18:39 no-preload-020313 kubelet[774]: I1009 20:18:39.247875     774 scope.go:117] "RemoveContainer" containerID="0afd5a0006249b39174c994b4216b2b676030afc2da532d4d2ccbb7240b6bcf7"
	Oct 09 20:18:39 no-preload-020313 kubelet[774]: I1009 20:18:39.248183     774 scope.go:117] "RemoveContainer" containerID="874aa1307bd23de55196930ba25ea04fa85d47795dbd099fb33715b82b0ca793"
	Oct 09 20:18:39 no-preload-020313 kubelet[774]: E1009 20:18:39.248345     774 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-dtq85_kubernetes-dashboard(31bd6e7a-7a5f-4056-accd-4f55dfce30df)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-dtq85" podUID="31bd6e7a-7a5f-4056-accd-4f55dfce30df"
	Oct 09 20:18:39 no-preload-020313 kubelet[774]: I1009 20:18:39.258947     774 scope.go:117] "RemoveContainer" containerID="cfac8e5ac3da24e22eb9c6cef2647c4b3078ab69fc092c7b1a73d4bc627d2f52"
	Oct 09 20:18:39 no-preload-020313 kubelet[774]: I1009 20:18:39.277624     774 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-46jtk" podStartSLOduration=13.167354224 podStartE2EDuration="27.277605138s" podCreationTimestamp="2025-10-09 20:18:12 +0000 UTC" firstStartedPulling="2025-10-09 20:18:14.277329532 +0000 UTC m=+14.731942469" lastFinishedPulling="2025-10-09 20:18:28.387580446 +0000 UTC m=+28.842193383" observedRunningTime="2025-10-09 20:18:29.246684578 +0000 UTC m=+29.701297514" watchObservedRunningTime="2025-10-09 20:18:39.277605138 +0000 UTC m=+39.732218083"
	Oct 09 20:18:44 no-preload-020313 kubelet[774]: I1009 20:18:44.193190     774 scope.go:117] "RemoveContainer" containerID="874aa1307bd23de55196930ba25ea04fa85d47795dbd099fb33715b82b0ca793"
	Oct 09 20:18:44 no-preload-020313 kubelet[774]: E1009 20:18:44.193889     774 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-dtq85_kubernetes-dashboard(31bd6e7a-7a5f-4056-accd-4f55dfce30df)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-dtq85" podUID="31bd6e7a-7a5f-4056-accd-4f55dfce30df"
	Oct 09 20:18:57 no-preload-020313 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 09 20:18:57 no-preload-020313 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 09 20:18:57 no-preload-020313 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	
	
	==> kubernetes-dashboard [fe7a54433b35090ac26deba9c9b4e3b51e7532d0b41a463ca4aa4968c8781c7f] <==
	2025/10/09 20:18:28 Using namespace: kubernetes-dashboard
	2025/10/09 20:18:28 Using in-cluster config to connect to apiserver
	2025/10/09 20:18:28 Using secret token for csrf signing
	2025/10/09 20:18:28 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/10/09 20:18:28 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/10/09 20:18:28 Successful initial request to the apiserver, version: v1.34.1
	2025/10/09 20:18:28 Generating JWE encryption key
	2025/10/09 20:18:28 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/10/09 20:18:28 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/10/09 20:18:29 Initializing JWE encryption key from synchronized object
	2025/10/09 20:18:29 Creating in-cluster Sidecar client
	2025/10/09 20:18:29 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/09 20:18:29 Serving insecurely on HTTP port: 9090
	2025/10/09 20:18:59 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/09 20:18:28 Starting overwatch
	
	
	==> storage-provisioner [3d32dbce2cc613f987b4189a71d62e455a6390b309ef522069aa954c7269e07b] <==
	I1009 20:18:39.322187       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1009 20:18:39.335878       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1009 20:18:39.335938       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1009 20:18:39.341047       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1009 20:18:42.796537       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1009 20:18:47.057075       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1009 20:18:50.656151       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1009 20:18:53.710703       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1009 20:18:56.732722       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1009 20:18:56.738314       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1009 20:18:56.738504       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1009 20:18:56.738904       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"2df7659d-a29d-4122-8f28-18add9557e18", APIVersion:"v1", ResourceVersion:"686", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-020313_e5f27ae7-2a2d-4dc1-a83b-d251f668aa62 became leader
	I1009 20:18:56.738989       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-020313_e5f27ae7-2a2d-4dc1-a83b-d251f668aa62!
	W1009 20:18:56.743729       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1009 20:18:56.757602       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1009 20:18:56.839203       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-020313_e5f27ae7-2a2d-4dc1-a83b-d251f668aa62!
	W1009 20:18:58.762084       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1009 20:18:58.767373       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1009 20:19:00.774917       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1009 20:19:00.781013       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [cfac8e5ac3da24e22eb9c6cef2647c4b3078ab69fc092c7b1a73d4bc627d2f52] <==
	I1009 20:18:08.998705       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1009 20:18:39.179933       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
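Both the kindnet reflector errors and the first storage-provisioner instance in the logs above fail with dial tcp 10.96.0.1:443: i/o timeout while the restarted control plane is still converging. Purely as an illustration (not part of the test suite), a minimal Go probe for that in-cluster API server VIP, runnable from inside the cluster, could look like this:

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		// 10.96.0.1:443 is the in-cluster service VIP for the API server seen in the logs above.
		conn, err := net.DialTimeout("tcp", "10.96.0.1:443", 5*time.Second)
		if err != nil {
			// Same failure class logged by kindnet and the crashed storage-provisioner.
			fmt.Println("apiserver VIP unreachable:", err)
			return
		}
		conn.Close()
		fmt.Println("apiserver VIP reachable")
	}
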
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-020313 -n no-preload-020313
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-020313 -n no-preload-020313: exit status 2 (398.028967ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context no-preload-020313 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/no-preload/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
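The post-mortem above closes by listing pods whose phase is not Running via a kubectl field selector. A standalone sketch of the same check, illustrative only (it assumes kubectl is on PATH and reuses the no-preload-020313 context from this run):

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// Same query the harness runs: names of all pods not in the Running phase, across namespaces.
		cmd := exec.Command("kubectl", "--context", "no-preload-020313",
			"get", "po", "-A",
			"-o=jsonpath={.items[*].metadata.name}",
			"--field-selector=status.phase!=Running")
		out, err := cmd.CombinedOutput()
		fmt.Printf("non-Running pods: %q (err: %v)\n", string(out), err)
	}
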
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect no-preload-020313
helpers_test.go:243: (dbg) docker inspect no-preload-020313:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "5f4dc51ee851ef6c368b3e8adfe4e5921c2b1bdc3199a9c54c6ccf58afab3861",
	        "Created": "2025-10-09T20:16:11.761091001Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 485746,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-09T20:17:52.1087081Z",
	            "FinishedAt": "2025-10-09T20:17:51.097082467Z"
	        },
	        "Image": "sha256:0e619a02aa1f399c748a05785ca381533d7a01df724b62f3a9ddd9e288db8f6f",
	        "ResolvConfPath": "/var/lib/docker/containers/5f4dc51ee851ef6c368b3e8adfe4e5921c2b1bdc3199a9c54c6ccf58afab3861/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/5f4dc51ee851ef6c368b3e8adfe4e5921c2b1bdc3199a9c54c6ccf58afab3861/hostname",
	        "HostsPath": "/var/lib/docker/containers/5f4dc51ee851ef6c368b3e8adfe4e5921c2b1bdc3199a9c54c6ccf58afab3861/hosts",
	        "LogPath": "/var/lib/docker/containers/5f4dc51ee851ef6c368b3e8adfe4e5921c2b1bdc3199a9c54c6ccf58afab3861/5f4dc51ee851ef6c368b3e8adfe4e5921c2b1bdc3199a9c54c6ccf58afab3861-json.log",
	        "Name": "/no-preload-020313",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-020313:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "no-preload-020313",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "5f4dc51ee851ef6c368b3e8adfe4e5921c2b1bdc3199a9c54c6ccf58afab3861",
	                "LowerDir": "/var/lib/docker/overlay2/89e13088c213ea195f3949972cfac4cf35790514b34c96e6ac7e173e96264c21-init/diff:/var/lib/docker/overlay2/810a91395ed9b7ed2c0bbbdee8600efcf64f88722cbabc47d471235a9f901ed9/diff",
	                "MergedDir": "/var/lib/docker/overlay2/89e13088c213ea195f3949972cfac4cf35790514b34c96e6ac7e173e96264c21/merged",
	                "UpperDir": "/var/lib/docker/overlay2/89e13088c213ea195f3949972cfac4cf35790514b34c96e6ac7e173e96264c21/diff",
	                "WorkDir": "/var/lib/docker/overlay2/89e13088c213ea195f3949972cfac4cf35790514b34c96e6ac7e173e96264c21/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-020313",
	                "Source": "/var/lib/docker/volumes/no-preload-020313/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-020313",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-020313",
	                "name.minikube.sigs.k8s.io": "no-preload-020313",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "8f8a9b7a1c41c18467eaecea135d4540a1086c426b7aa2c4bea4a0559b6a0a27",
	            "SandboxKey": "/var/run/docker/netns/8f8a9b7a1c41",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33431"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33432"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33435"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33433"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33434"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-020313": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "ee:d7:ce:42:d0:83",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "e50c4d176bfa3eef4ff1ee9bca0047e351ec3aec36a4229f03c93ea4e9e653dd",
	                    "EndpointID": "d069caa9e0dc0a399cacbdbeedbdb6e5d8d58aa404f272e52fcf815989963c6e",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-020313",
	                        "5f4dc51ee851"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
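The inspect output above shows the API server port 8443/tcp published on 127.0.0.1:33434. As an illustrative aside (not part of the harness), that mapping can be pulled back out of the docker inspect JSON with a short Go program:

	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)

	// Only the fields needed to read NetworkSettings.Ports from docker inspect.
	type container struct {
		NetworkSettings struct {
			Ports map[string][]struct {
				HostIp   string
				HostPort string
			}
		}
	}

	func main() {
		out, err := exec.Command("docker", "inspect", "no-preload-020313").Output()
		if err != nil {
			panic(err)
		}
		var cs []container
		if err := json.Unmarshal(out, &cs); err != nil {
			panic(err)
		}
		if len(cs) == 0 {
			panic("no such container")
		}
		for _, binding := range cs[0].NetworkSettings.Ports["8443/tcp"] {
			// For the container above this prints 127.0.0.1:33434.
			fmt.Printf("%s:%s\n", binding.HostIp, binding.HostPort)
		}
	}

Running it against a stopped or deleted profile would simply yield no bindings for 8443/tcp.
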
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-020313 -n no-preload-020313
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-020313 -n no-preload-020313: exit status 2 (379.457405ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
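The --format={{.Host}} and --format={{.APIServer}} flags used above are Go text/template expressions rendered against minikube's status struct, which is why stdout is the single word Running even though the command exits with status 2. A toy rendering with a simplified stand-in struct (not minikube's actual type):

	package main

	import (
		"os"
		"text/template"
	)

	// Simplified stand-in for the status fields referenced by the templates above.
	type status struct {
		Host, Kubelet, APIServer, Kubeconfig string
	}

	func main() {
		s := status{Host: "Running", Kubelet: "Stopped", APIServer: "Running", Kubeconfig: "Configured"}
		// Renders just "Running", matching the captured stdout above.
		tmpl := template.Must(template.New("status").Parse("{{.APIServer}}\n"))
		if err := tmpl.Execute(os.Stdout, s); err != nil {
			panic(err)
		}
	}

The non-zero exit status, by contrast, reflects the overall cluster state rather than the single templated field, which is why the harness notes it "may be ok".
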
helpers_test.go:252: <<< TestStartStop/group/no-preload/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-020313 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p no-preload-020313 logs -n 25: (1.209889239s)
helpers_test.go:260: TestStartStop/group/no-preload/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │         PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -p cert-expiration-282540 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio                                                                                                                                        │ cert-expiration-282540   │ jenkins │ v1.37.0 │ 09 Oct 25 20:12 UTC │ 09 Oct 25 20:12 UTC │
	│ delete  │ -p force-systemd-env-242564                                                                                                                                                                                                                   │ force-systemd-env-242564 │ jenkins │ v1.37.0 │ 09 Oct 25 20:14 UTC │ 09 Oct 25 20:14 UTC │
	│ start   │ -p cert-options-038875 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio                     │ cert-options-038875      │ jenkins │ v1.37.0 │ 09 Oct 25 20:14 UTC │ 09 Oct 25 20:15 UTC │
	│ ssh     │ cert-options-038875 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                                                   │ cert-options-038875      │ jenkins │ v1.37.0 │ 09 Oct 25 20:15 UTC │ 09 Oct 25 20:15 UTC │
	│ ssh     │ -p cert-options-038875 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                 │ cert-options-038875      │ jenkins │ v1.37.0 │ 09 Oct 25 20:15 UTC │ 09 Oct 25 20:15 UTC │
	│ delete  │ -p cert-options-038875                                                                                                                                                                                                                        │ cert-options-038875      │ jenkins │ v1.37.0 │ 09 Oct 25 20:15 UTC │ 09 Oct 25 20:15 UTC │
	│ start   │ -p old-k8s-version-670649 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-670649   │ jenkins │ v1.37.0 │ 09 Oct 25 20:15 UTC │ 09 Oct 25 20:16 UTC │
	│ start   │ -p cert-expiration-282540 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-282540   │ jenkins │ v1.37.0 │ 09 Oct 25 20:15 UTC │ 09 Oct 25 20:16 UTC │
	│ delete  │ -p cert-expiration-282540                                                                                                                                                                                                                     │ cert-expiration-282540   │ jenkins │ v1.37.0 │ 09 Oct 25 20:16 UTC │ 09 Oct 25 20:16 UTC │
	│ start   │ -p no-preload-020313 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-020313        │ jenkins │ v1.37.0 │ 09 Oct 25 20:16 UTC │ 09 Oct 25 20:17 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-670649 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-670649   │ jenkins │ v1.37.0 │ 09 Oct 25 20:16 UTC │                     │
	│ stop    │ -p old-k8s-version-670649 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-670649   │ jenkins │ v1.37.0 │ 09 Oct 25 20:16 UTC │ 09 Oct 25 20:16 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-670649 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-670649   │ jenkins │ v1.37.0 │ 09 Oct 25 20:16 UTC │ 09 Oct 25 20:16 UTC │
	│ start   │ -p old-k8s-version-670649 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-670649   │ jenkins │ v1.37.0 │ 09 Oct 25 20:16 UTC │ 09 Oct 25 20:17 UTC │
	│ addons  │ enable metrics-server -p no-preload-020313 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-020313        │ jenkins │ v1.37.0 │ 09 Oct 25 20:17 UTC │                     │
	│ stop    │ -p no-preload-020313 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-020313        │ jenkins │ v1.37.0 │ 09 Oct 25 20:17 UTC │ 09 Oct 25 20:17 UTC │
	│ image   │ old-k8s-version-670649 image list --format=json                                                                                                                                                                                               │ old-k8s-version-670649   │ jenkins │ v1.37.0 │ 09 Oct 25 20:17 UTC │ 09 Oct 25 20:17 UTC │
	│ pause   │ -p old-k8s-version-670649 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-670649   │ jenkins │ v1.37.0 │ 09 Oct 25 20:17 UTC │                     │
	│ addons  │ enable dashboard -p no-preload-020313 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-020313        │ jenkins │ v1.37.0 │ 09 Oct 25 20:17 UTC │ 09 Oct 25 20:17 UTC │
	│ start   │ -p no-preload-020313 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-020313        │ jenkins │ v1.37.0 │ 09 Oct 25 20:17 UTC │ 09 Oct 25 20:18 UTC │
	│ delete  │ -p old-k8s-version-670649                                                                                                                                                                                                                     │ old-k8s-version-670649   │ jenkins │ v1.37.0 │ 09 Oct 25 20:17 UTC │ 09 Oct 25 20:17 UTC │
	│ delete  │ -p old-k8s-version-670649                                                                                                                                                                                                                     │ old-k8s-version-670649   │ jenkins │ v1.37.0 │ 09 Oct 25 20:18 UTC │ 09 Oct 25 20:18 UTC │
	│ start   │ -p embed-certs-565110 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-565110       │ jenkins │ v1.37.0 │ 09 Oct 25 20:18 UTC │                     │
	│ image   │ no-preload-020313 image list --format=json                                                                                                                                                                                                    │ no-preload-020313        │ jenkins │ v1.37.0 │ 09 Oct 25 20:18 UTC │ 09 Oct 25 20:18 UTC │
	│ pause   │ -p no-preload-020313 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-020313        │ jenkins │ v1.37.0 │ 09 Oct 25 20:18 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/09 20:18:00
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
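Every line below carries the klog prefix documented by this header, so the combined log can be sliced by severity, process id, or source file. A minimal Go sketch that splits one such line into its fields; the regular expression is derived only from the format string above, and the sample line is copied from this log:

    package main

    import (
    	"fmt"
    	"regexp"
    )

    // klogLine matches the prefix documented in the log header:
    // [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
    var klogLine = regexp.MustCompile(`^([IWEF])(\d{4}) (\d{2}:\d{2}:\d{2}\.\d{6})\s+(\d+) ([^:]+:\d+)\] (.*)$`)

    func main() {
    	sample := "I1009 20:18:00.590755  487957 out.go:360] Setting OutFile to fd 1 ..."
    	m := klogLine.FindStringSubmatch(sample)
    	if m == nil {
    		fmt.Println("no match")
    		return
    	}
    	fmt.Printf("severity=%s date=%s time=%s pid=%s source=%s msg=%q\n",
    		m[1], m[2], m[3], m[4], m[5], m[6])
    }

Filtering on the pid field (487957 vs 485563 in this run) is enough to separate the two interleaved start sequences.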
	I1009 20:18:00.590755  487957 out.go:360] Setting OutFile to fd 1 ...
	I1009 20:18:00.591024  487957 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1009 20:18:00.591052  487957 out.go:374] Setting ErrFile to fd 2...
	I1009 20:18:00.591103  487957 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1009 20:18:00.600812  487957 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21683-294150/.minikube/bin
	I1009 20:18:00.601547  487957 out.go:368] Setting JSON to false
	I1009 20:18:00.602578  487957 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":10820,"bootTime":1760030261,"procs":177,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1009 20:18:00.602791  487957 start.go:143] virtualization:  
	I1009 20:18:00.609492  487957 out.go:179] * [embed-certs-565110] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1009 20:18:00.622425  487957 out.go:179]   - MINIKUBE_LOCATION=21683
	I1009 20:18:00.622792  487957 notify.go:221] Checking for updates...
	I1009 20:18:00.639698  487957 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1009 20:18:00.643468  487957 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21683-294150/kubeconfig
	I1009 20:18:00.647174  487957 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21683-294150/.minikube
	I1009 20:18:00.650799  487957 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1009 20:18:00.654297  487957 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1009 20:18:00.658339  487957 config.go:182] Loaded profile config "no-preload-020313": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1009 20:18:00.658502  487957 driver.go:422] Setting default libvirt URI to qemu:///system
	I1009 20:18:00.710324  487957 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1009 20:18:00.710528  487957 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1009 20:18:00.851868  487957 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:38 OomKillDisable:true NGoroutines:53 SystemTime:2025-10-09 20:18:00.839130737 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214827008 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1009 20:18:00.852005  487957 docker.go:319] overlay module found
	I1009 20:18:00.855728  487957 out.go:179] * Using the docker driver based on user configuration
	I1009 20:18:00.857839  487957 start.go:309] selected driver: docker
	I1009 20:18:00.857861  487957 start.go:930] validating driver "docker" against <nil>
	I1009 20:18:00.857876  487957 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1009 20:18:00.858636  487957 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1009 20:18:00.991203  487957 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:38 OomKillDisable:true NGoroutines:53 SystemTime:2025-10-09 20:18:00.976990516 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214827008 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1009 20:18:00.991377  487957 start_flags.go:328] no existing cluster config was found, will generate one from the flags 
	I1009 20:18:00.991643  487957 start_flags.go:993] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1009 20:18:00.995088  487957 out.go:179] * Using Docker driver with root privileges
	I1009 20:18:00.998145  487957 cni.go:84] Creating CNI manager for ""
	I1009 20:18:00.998224  487957 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1009 20:18:00.998236  487957 start_flags.go:337] Found "CNI" CNI - setting NetworkPlugin=cni
	I1009 20:18:00.998315  487957 start.go:353] cluster config:
	{Name:embed-certs-565110 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-565110 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Contain
erRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPI
D:0 GPUs: AutoPauseInterval:1m0s}
	I1009 20:18:01.001794  487957 out.go:179] * Starting "embed-certs-565110" primary control-plane node in "embed-certs-565110" cluster
	I1009 20:18:01.004911  487957 cache.go:123] Beginning downloading kic base image for docker with crio
	I1009 20:18:01.008082  487957 out.go:179] * Pulling base image v0.0.48-1759745255-21703 ...
	I1009 20:18:01.010981  487957 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1009 20:18:01.011043  487957 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21683-294150/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1009 20:18:01.011053  487957 cache.go:58] Caching tarball of preloaded images
	I1009 20:18:01.011102  487957 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon
	I1009 20:18:01.011406  487957 preload.go:233] Found /home/jenkins/minikube-integration/21683-294150/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1009 20:18:01.011420  487957 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1009 20:18:01.011532  487957 profile.go:143] Saving config to /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/embed-certs-565110/config.json ...
	I1009 20:18:01.011551  487957 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/embed-certs-565110/config.json: {Name:mk0c43fa37b9dbd5eccdb406ccdff1b49370e0a7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 20:18:01.049100  487957 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon, skipping pull
	I1009 20:18:01.049142  487957 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 exists in daemon, skipping load
	I1009 20:18:01.049156  487957 cache.go:232] Successfully downloaded all kic artifacts
	I1009 20:18:01.049180  487957 start.go:361] acquireMachinesLock for embed-certs-565110: {Name:mk32ec325145c7dbf708685a0b7d3c4450230c14 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1009 20:18:01.049274  487957 start.go:365] duration metric: took 79.254µs to acquireMachinesLock for "embed-certs-565110"
	I1009 20:18:01.049300  487957 start.go:94] Provisioning new machine with config: &{Name:embed-certs-565110 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-565110 Namespace:default APIServerHAVIP: APIServe
rName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmw
arePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1009 20:18:01.049378  487957 start.go:126] createHost starting for "" (driver="docker")
	I1009 20:17:59.144105  485563 cli_runner.go:164] Run: docker network inspect no-preload-020313 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1009 20:17:59.172476  485563 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1009 20:17:59.176779  485563 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1009 20:17:59.191417  485563 kubeadm.go:883] updating cluster {Name:no-preload-020313 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-020313 Namespace:default APIServerHAVIP: APIServerName:minikubeCA API
ServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker
BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1009 20:17:59.191536  485563 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1009 20:17:59.191579  485563 ssh_runner.go:195] Run: sudo crictl images --output json
	I1009 20:17:59.225318  485563 crio.go:514] all images are preloaded for cri-o runtime.
	I1009 20:17:59.225339  485563 cache_images.go:85] Images are preloaded, skipping loading
	I1009 20:17:59.225347  485563 kubeadm.go:934] updating node { 192.168.85.2 8443 v1.34.1 crio true true} ...
	I1009 20:17:59.225437  485563 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=no-preload-020313 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:no-preload-020313 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
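The generated drop-in above overrides the kubelet ExecStart with the profile-specific flags (bootstrap kubeconfig, hostname override, node IP). To confirm what actually landed on the node, one option is to read the rendered file back over minikube ssh; a small Go wrapper, assuming the minikube binary is on PATH and using the profile name and drop-in path that appear later in this log:

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    )

    func main() {
    	// Dump the kubelet drop-in minikube generated; the path matches the
    	// scp destination recorded further down in this log.
    	cmd := exec.Command("minikube", "-p", "no-preload-020313", "ssh", "--",
    		"sudo", "cat", "/etc/systemd/system/kubelet.service.d/10-kubeadm.conf")
    	cmd.Stdout = os.Stdout
    	cmd.Stderr = os.Stderr
    	if err := cmd.Run(); err != nil {
    		fmt.Fprintln(os.Stderr, "minikube ssh failed:", err)
    	}
    }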
	I1009 20:17:59.225524  485563 ssh_runner.go:195] Run: crio config
	I1009 20:17:59.289760  485563 cni.go:84] Creating CNI manager for ""
	I1009 20:17:59.289786  485563 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1009 20:17:59.289808  485563 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1009 20:17:59.289838  485563 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-020313 NodeName:no-preload-020313 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc
/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1009 20:17:59.289964  485563 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-020313"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
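The kubeadm configuration dumped above is one multi-document YAML stream (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration), written to the node as /var/tmp/minikube/kubeadm.yaml.new a few lines below. A quick way to sanity-check a saved copy is to decode each document in turn; a minimal Go sketch, assuming the gopkg.in/yaml.v3 dependency is available and using a placeholder local file name:

    package main

    import (
    	"errors"
    	"fmt"
    	"io"
    	"os"

    	"gopkg.in/yaml.v3" // external dependency, assumed available
    )

    func main() {
    	// "kubeadm.yaml" is a local copy of the config dumped in the log.
    	f, err := os.Open("kubeadm.yaml")
    	if err != nil {
    		panic(err)
    	}
    	defer f.Close()

    	dec := yaml.NewDecoder(f)
    	for i := 1; ; i++ {
    		var doc map[string]interface{}
    		err := dec.Decode(&doc)
    		if errors.Is(err, io.EOF) {
    			break // end of the multi-document stream
    		}
    		if err != nil {
    			fmt.Printf("document %d: %v\n", i, err)
    			return
    		}
    		fmt.Printf("document %d: kind=%v apiVersion=%v\n", i, doc["kind"], doc["apiVersion"])
    	}
    }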
	
	I1009 20:17:59.290039  485563 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1009 20:17:59.304571  485563 binaries.go:44] Found k8s binaries, skipping transfer
	I1009 20:17:59.304653  485563 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1009 20:17:59.313037  485563 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1009 20:17:59.328174  485563 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1009 20:17:59.343088  485563 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2214 bytes)
	I1009 20:17:59.357695  485563 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1009 20:17:59.361905  485563 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
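The /bin/bash one-liner above strips any existing control-plane.minikube.internal line from /etc/hosts and re-appends it with the node IP, so the entry is always present exactly once. The same idea as a minimal Go sketch; the file name is a placeholder (the real /etc/hosts needs root), and the tab separator matters, just as it does in the grep pattern:

    package main

    import (
    	"fmt"
    	"os"
    	"strings"
    )

    // ensureHostsEntry mirrors the grep -v / echo pipeline: drop any line that
    // already ends in "<tab><host>", then append "<ip><tab><host>".
    func ensureHostsEntry(path, ip, host string) error {
    	data, err := os.ReadFile(path)
    	if err != nil {
    		return err
    	}
    	var kept []string
    	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
    		if strings.HasSuffix(line, "\t"+host) {
    			continue // stale entry, re-added below
    		}
    		kept = append(kept, line)
    	}
    	kept = append(kept, ip+"\t"+host)
    	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0o644)
    }

    func main() {
    	// Experiment on a scratch copy rather than the real /etc/hosts.
    	if err := ensureHostsEntry("hosts.copy", "192.168.85.2", "control-plane.minikube.internal"); err != nil {
    		fmt.Fprintln(os.Stderr, err)
    	}
    }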
	I1009 20:17:59.373260  485563 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 20:17:59.523133  485563 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1009 20:17:59.541440  485563 certs.go:69] Setting up /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/no-preload-020313 for IP: 192.168.85.2
	I1009 20:17:59.541462  485563 certs.go:195] generating shared ca certs ...
	I1009 20:17:59.541478  485563 certs.go:227] acquiring lock for ca certs: {Name:mk21fccf7de12baa51e226eec44546b42b579a70 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 20:17:59.541602  485563 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21683-294150/.minikube/ca.key
	I1009 20:17:59.541645  485563 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21683-294150/.minikube/proxy-client-ca.key
	I1009 20:17:59.541657  485563 certs.go:257] generating profile certs ...
	I1009 20:17:59.541756  485563 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/no-preload-020313/client.key
	I1009 20:17:59.541820  485563 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/no-preload-020313/apiserver.key.ff7e88d0
	I1009 20:17:59.541865  485563 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/no-preload-020313/proxy-client.key
	I1009 20:17:59.541976  485563 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-294150/.minikube/certs/296002.pem (1338 bytes)
	W1009 20:17:59.542011  485563 certs.go:480] ignoring /home/jenkins/minikube-integration/21683-294150/.minikube/certs/296002_empty.pem, impossibly tiny 0 bytes
	I1009 20:17:59.542022  485563 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-294150/.minikube/certs/ca-key.pem (1679 bytes)
	I1009 20:17:59.542049  485563 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-294150/.minikube/certs/ca.pem (1078 bytes)
	I1009 20:17:59.542077  485563 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-294150/.minikube/certs/cert.pem (1123 bytes)
	I1009 20:17:59.542097  485563 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-294150/.minikube/certs/key.pem (1679 bytes)
	I1009 20:17:59.542140  485563 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-294150/.minikube/files/etc/ssl/certs/2960022.pem (1708 bytes)
	I1009 20:17:59.542726  485563 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-294150/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1009 20:17:59.617731  485563 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-294150/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1009 20:17:59.643397  485563 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-294150/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1009 20:17:59.707301  485563 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-294150/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1009 20:17:59.761367  485563 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/no-preload-020313/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1009 20:17:59.827017  485563 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/no-preload-020313/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1009 20:17:59.887393  485563 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/no-preload-020313/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1009 20:17:59.944913  485563 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/no-preload-020313/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1009 20:17:59.998409  485563 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-294150/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1009 20:18:00.020574  485563 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-294150/.minikube/certs/296002.pem --> /usr/share/ca-certificates/296002.pem (1338 bytes)
	I1009 20:18:00.043425  485563 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-294150/.minikube/files/etc/ssl/certs/2960022.pem --> /usr/share/ca-certificates/2960022.pem (1708 bytes)
	I1009 20:18:00.066864  485563 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1009 20:18:00.091166  485563 ssh_runner.go:195] Run: openssl version
	I1009 20:18:00.101199  485563 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1009 20:18:00.129352  485563 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1009 20:18:00.220247  485563 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  9 19:01 /usr/share/ca-certificates/minikubeCA.pem
	I1009 20:18:00.220335  485563 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1009 20:18:00.328864  485563 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1009 20:18:00.360583  485563 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/296002.pem && ln -fs /usr/share/ca-certificates/296002.pem /etc/ssl/certs/296002.pem"
	I1009 20:18:00.376978  485563 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/296002.pem
	I1009 20:18:00.399128  485563 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  9 19:08 /usr/share/ca-certificates/296002.pem
	I1009 20:18:00.399228  485563 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/296002.pem
	I1009 20:18:00.506640  485563 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/296002.pem /etc/ssl/certs/51391683.0"
	I1009 20:18:00.516429  485563 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2960022.pem && ln -fs /usr/share/ca-certificates/2960022.pem /etc/ssl/certs/2960022.pem"
	I1009 20:18:00.606675  485563 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2960022.pem
	I1009 20:18:00.620322  485563 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  9 19:08 /usr/share/ca-certificates/2960022.pem
	I1009 20:18:00.620389  485563 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2960022.pem
	I1009 20:18:00.688131  485563 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2960022.pem /etc/ssl/certs/3ec20f2e.0"
	I1009 20:18:00.744809  485563 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1009 20:18:00.755206  485563 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1009 20:18:00.854489  485563 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1009 20:18:01.057704  485563 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1009 20:18:01.175947  485563 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1009 20:18:01.290460  485563 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1009 20:18:01.366479  485563 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
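The run of openssl x509 ... -checkend 86400 commands above asks, for each control-plane certificate, whether it will still be valid 86400 seconds (24 hours) from now; openssl exits non-zero if the certificate expires within that window. The equivalent check with Go's standard library, using an illustrative local file name:

    package main

    import (
    	"crypto/x509"
    	"encoding/pem"
    	"fmt"
    	"os"
    	"time"
    )

    // expiresWithin reports whether the PEM certificate at path expires
    // within duration d, mirroring `openssl x509 -checkend`.
    func expiresWithin(path string, d time.Duration) (bool, error) {
    	data, err := os.ReadFile(path)
    	if err != nil {
    		return false, err
    	}
    	block, _ := pem.Decode(data)
    	if block == nil {
    		return false, fmt.Errorf("%s: no PEM block found", path)
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		return false, err
    	}
    	return time.Now().Add(d).After(cert.NotAfter), nil
    }

    func main() {
    	// 24h matches the -checkend 86400 argument in the log.
    	soon, err := expiresWithin("apiserver.crt", 24*time.Hour)
    	if err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    	fmt.Println("expires within 24h:", soon)
    }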
	I1009 20:18:01.481489  485563 kubeadm.go:400] StartCluster: {Name:no-preload-020313 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-020313 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APISer
verNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker Bi
naryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1009 20:18:01.481579  485563 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1009 20:18:01.481649  485563 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1009 20:18:01.605366  485563 cri.go:89] found id: "22b87e577d7a8f108e7d77d095e44d5b3392e21fb7da8260fe838b3e930b2229"
	I1009 20:18:01.605392  485563 cri.go:89] found id: "5abd9717aed8a5baaa24ce4dbac3f6a6652f3d3b84cb43dc09007beee7a84423"
	I1009 20:18:01.605398  485563 cri.go:89] found id: "bdcbfecca01ea6e3e0ee392800df2ec67f04ed687955da27cce3925008d3bc5a"
	I1009 20:18:01.605401  485563 cri.go:89] found id: "d49e0cc690dcab668ca06327548e322b4d012301c7ad96444959726efbca4e09"
	I1009 20:18:01.605404  485563 cri.go:89] found id: ""
	I1009 20:18:01.605457  485563 ssh_runner.go:195] Run: sudo runc list -f json
	W1009 20:18:01.635114  485563 kubeadm.go:407] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-09T20:18:01Z" level=error msg="open /run/runc: no such file or directory"
	I1009 20:18:01.635196  485563 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1009 20:18:01.644565  485563 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1009 20:18:01.644582  485563 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1009 20:18:01.644643  485563 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1009 20:18:01.653658  485563 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1009 20:18:01.654048  485563 kubeconfig.go:47] verify endpoint returned: get endpoint: "no-preload-020313" does not appear in /home/jenkins/minikube-integration/21683-294150/kubeconfig
	I1009 20:18:01.654137  485563 kubeconfig.go:62] /home/jenkins/minikube-integration/21683-294150/kubeconfig needs updating (will repair): [kubeconfig missing "no-preload-020313" cluster setting kubeconfig missing "no-preload-020313" context setting]
	I1009 20:18:01.654442  485563 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-294150/kubeconfig: {Name:mke4661344aa77e10bf6690825e8d3a29b29b411 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 20:18:01.656308  485563 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1009 20:18:01.669005  485563 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.85.2
	I1009 20:18:01.669038  485563 kubeadm.go:601] duration metric: took 24.449837ms to restartPrimaryControlPlane
	I1009 20:18:01.669047  485563 kubeadm.go:402] duration metric: took 187.567416ms to StartCluster
	I1009 20:18:01.669062  485563 settings.go:142] acquiring lock: {Name:mk20228ebaa2294ae35726600a0d8058088b24a4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 20:18:01.669219  485563 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21683-294150/kubeconfig
	I1009 20:18:01.669862  485563 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-294150/kubeconfig: {Name:mke4661344aa77e10bf6690825e8d3a29b29b411 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 20:18:01.670072  485563 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1009 20:18:01.670465  485563 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1009 20:18:01.670539  485563 addons.go:69] Setting storage-provisioner=true in profile "no-preload-020313"
	I1009 20:18:01.670553  485563 addons.go:238] Setting addon storage-provisioner=true in "no-preload-020313"
	W1009 20:18:01.670559  485563 addons.go:247] addon storage-provisioner should already be in state true
	I1009 20:18:01.670579  485563 host.go:66] Checking if "no-preload-020313" exists ...
	I1009 20:18:01.671074  485563 cli_runner.go:164] Run: docker container inspect no-preload-020313 --format={{.State.Status}}
	I1009 20:18:01.671682  485563 config.go:182] Loaded profile config "no-preload-020313": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1009 20:18:01.671776  485563 addons.go:69] Setting dashboard=true in profile "no-preload-020313"
	I1009 20:18:01.671811  485563 addons.go:238] Setting addon dashboard=true in "no-preload-020313"
	W1009 20:18:01.671834  485563 addons.go:247] addon dashboard should already be in state true
	I1009 20:18:01.671886  485563 host.go:66] Checking if "no-preload-020313" exists ...
	I1009 20:18:01.672710  485563 cli_runner.go:164] Run: docker container inspect no-preload-020313 --format={{.State.Status}}
	I1009 20:18:01.674029  485563 addons.go:69] Setting default-storageclass=true in profile "no-preload-020313"
	I1009 20:18:01.674063  485563 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-020313"
	I1009 20:18:01.674644  485563 cli_runner.go:164] Run: docker container inspect no-preload-020313 --format={{.State.Status}}
	I1009 20:18:01.683962  485563 out.go:179] * Verifying Kubernetes components...
	I1009 20:18:01.690151  485563 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 20:18:01.774646  485563 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1009 20:18:01.777516  485563 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1009 20:18:01.777538  485563 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1009 20:18:01.777602  485563 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-020313
	I1009 20:18:01.786044  485563 addons.go:238] Setting addon default-storageclass=true in "no-preload-020313"
	W1009 20:18:01.786069  485563 addons.go:247] addon default-storageclass should already be in state true
	I1009 20:18:01.786094  485563 host.go:66] Checking if "no-preload-020313" exists ...
	I1009 20:18:01.786508  485563 cli_runner.go:164] Run: docker container inspect no-preload-020313 --format={{.State.Status}}
	I1009 20:18:01.787836  485563 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1009 20:18:01.792027  485563 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1009 20:18:01.053086  487957 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1009 20:18:01.053443  487957 start.go:160] libmachine.API.Create for "embed-certs-565110" (driver="docker")
	I1009 20:18:01.053522  487957 client.go:168] LocalClient.Create starting
	I1009 20:18:01.053664  487957 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21683-294150/.minikube/certs/ca.pem
	I1009 20:18:01.053737  487957 main.go:141] libmachine: Decoding PEM data...
	I1009 20:18:01.053774  487957 main.go:141] libmachine: Parsing certificate...
	I1009 20:18:01.053863  487957 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21683-294150/.minikube/certs/cert.pem
	I1009 20:18:01.053922  487957 main.go:141] libmachine: Decoding PEM data...
	I1009 20:18:01.053949  487957 main.go:141] libmachine: Parsing certificate...
	I1009 20:18:01.054442  487957 cli_runner.go:164] Run: docker network inspect embed-certs-565110 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1009 20:18:01.081290  487957 cli_runner.go:211] docker network inspect embed-certs-565110 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1009 20:18:01.081381  487957 network_create.go:284] running [docker network inspect embed-certs-565110] to gather additional debugging logs...
	I1009 20:18:01.081404  487957 cli_runner.go:164] Run: docker network inspect embed-certs-565110
	W1009 20:18:01.109258  487957 cli_runner.go:211] docker network inspect embed-certs-565110 returned with exit code 1
	I1009 20:18:01.109287  487957 network_create.go:287] error running [docker network inspect embed-certs-565110]: docker network inspect embed-certs-565110: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network embed-certs-565110 not found
	I1009 20:18:01.109301  487957 network_create.go:289] output of [docker network inspect embed-certs-565110]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network embed-certs-565110 not found
	
	** /stderr **
	I1009 20:18:01.109400  487957 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1009 20:18:01.142304  487957 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-3847a6577684 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:6e:b5:e6:7d:c7:ad} reservation:<nil>}
	I1009 20:18:01.142682  487957 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-5742e12e0dad IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:9e:82:91:fd:a6:fb} reservation:<nil>}
	I1009 20:18:01.142904  487957 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-11b099636187 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:0e:bb:e5:1b:6d:a2} reservation:<nil>}
	I1009 20:18:01.143323  487957 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40019a32d0}
	I1009 20:18:01.143342  487957 network_create.go:124] attempt to create docker network embed-certs-565110 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1009 20:18:01.143400  487957 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=embed-certs-565110 embed-certs-565110
	I1009 20:18:01.236989  487957 network_create.go:108] docker network embed-certs-565110 192.168.76.0/24 created
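The lines above show how the network for the new profile was picked: /24 subnets already used by other minikube networks (192.168.49.0, 192.168.58.0, 192.168.67.0) are skipped and the first free candidate, 192.168.76.0/24, is created. A toy Go sketch of that selection; the step of 9 between candidates and the hard-coded "taken" set are read off this log, not taken from minikube's source:

    package main

    import (
    	"fmt"
    	"net"
    )

    // pickSubnet returns the first candidate /24 that is not already in use.
    func pickSubnet(taken map[string]bool) string {
    	for octet := 49; octet < 256; octet += 9 {
    		cidr := fmt.Sprintf("192.168.%d.0/24", octet)
    		if _, _, err := net.ParseCIDR(cidr); err != nil {
    			continue
    		}
    		if !taken[cidr] {
    			return cidr
    		}
    	}
    	return ""
    }

    func main() {
    	taken := map[string]bool{
    		"192.168.49.0/24": true,
    		"192.168.58.0/24": true,
    		"192.168.67.0/24": true,
    	}
    	fmt.Println(pickSubnet(taken)) // 192.168.76.0/24
    }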
	I1009 20:18:01.237032  487957 kic.go:121] calculated static IP "192.168.76.2" for the "embed-certs-565110" container
	I1009 20:18:01.237312  487957 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1009 20:18:01.270871  487957 cli_runner.go:164] Run: docker volume create embed-certs-565110 --label name.minikube.sigs.k8s.io=embed-certs-565110 --label created_by.minikube.sigs.k8s.io=true
	I1009 20:18:01.299377  487957 oci.go:103] Successfully created a docker volume embed-certs-565110
	I1009 20:18:01.299478  487957 cli_runner.go:164] Run: docker run --rm --name embed-certs-565110-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-565110 --entrypoint /usr/bin/test -v embed-certs-565110:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 -d /var/lib
	I1009 20:18:02.189014  487957 oci.go:107] Successfully prepared a docker volume embed-certs-565110
	I1009 20:18:02.189060  487957 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1009 20:18:02.189079  487957 kic.go:194] Starting extracting preloaded images to volume ...
	I1009 20:18:02.189165  487957 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21683-294150/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v embed-certs-565110:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 -I lz4 -xf /preloaded.tar -C /extractDir
	I1009 20:18:01.799264  485563 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1009 20:18:01.799306  485563 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1009 20:18:01.799382  485563 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-020313
	I1009 20:18:01.824878  485563 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33431 SSHKeyPath:/home/jenkins/minikube-integration/21683-294150/.minikube/machines/no-preload-020313/id_rsa Username:docker}
	I1009 20:18:01.839295  485563 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1009 20:18:01.839319  485563 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1009 20:18:01.839380  485563 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-020313
	I1009 20:18:01.868643  485563 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33431 SSHKeyPath:/home/jenkins/minikube-integration/21683-294150/.minikube/machines/no-preload-020313/id_rsa Username:docker}
	I1009 20:18:01.885200  485563 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33431 SSHKeyPath:/home/jenkins/minikube-integration/21683-294150/.minikube/machines/no-preload-020313/id_rsa Username:docker}
	I1009 20:18:02.128754  485563 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1009 20:18:02.234903  485563 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1009 20:18:02.344181  485563 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1009 20:18:02.344203  485563 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1009 20:18:02.479008  485563 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1009 20:18:02.487768  485563 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1009 20:18:02.487793  485563 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1009 20:18:02.544681  485563 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1009 20:18:02.544706  485563 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1009 20:18:02.643290  485563 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1009 20:18:02.643315  485563 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1009 20:18:02.746960  485563 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1009 20:18:02.747004  485563 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1009 20:18:02.775323  485563 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1009 20:18:02.775362  485563 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1009 20:18:02.807163  485563 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1009 20:18:02.807193  485563 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1009 20:18:02.831381  485563 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1009 20:18:02.831408  485563 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1009 20:18:02.859780  485563 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1009 20:18:02.859808  485563 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1009 20:18:02.924500  485563 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1009 20:18:10.031390  485563 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (7.902603361s)
	I1009 20:18:10.031448  485563 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (7.796524649s)
	I1009 20:18:10.031491  485563 node_ready.go:35] waiting up to 6m0s for node "no-preload-020313" to be "Ready" ...
	I1009 20:18:10.031843  485563 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (7.552754862s)
	I1009 20:18:10.032131  485563 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (7.107584389s)
	I1009 20:18:10.037392  485563 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p no-preload-020313 addons enable metrics-server
	
	I1009 20:18:10.082370  485563 node_ready.go:49] node "no-preload-020313" is "Ready"
	I1009 20:18:10.082403  485563 node_ready.go:38] duration metric: took 50.888327ms for node "no-preload-020313" to be "Ready" ...
	I1009 20:18:10.082418  485563 api_server.go:52] waiting for apiserver process to appear ...
	I1009 20:18:10.082478  485563 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:10.093413  485563 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	I1009 20:18:08.074561  487957 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21683-294150/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v embed-certs-565110:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 -I lz4 -xf /preloaded.tar -C /extractDir: (5.885358999s)
	I1009 20:18:08.074593  487957 kic.go:203] duration metric: took 5.885510985s to extract preloaded images to volume ...
	W1009 20:18:08.074741  487957 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1009 20:18:08.074851  487957 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1009 20:18:08.179323  487957 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname embed-certs-565110 --name embed-certs-565110 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-565110 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=embed-certs-565110 --network embed-certs-565110 --ip 192.168.76.2 --volume embed-certs-565110:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92
	I1009 20:18:08.617473  487957 cli_runner.go:164] Run: docker container inspect embed-certs-565110 --format={{.State.Running}}
	I1009 20:18:08.645855  487957 cli_runner.go:164] Run: docker container inspect embed-certs-565110 --format={{.State.Status}}
	I1009 20:18:08.674614  487957 cli_runner.go:164] Run: docker exec embed-certs-565110 stat /var/lib/dpkg/alternatives/iptables
	I1009 20:18:08.759637  487957 oci.go:144] the created container "embed-certs-565110" has a running status.
	I1009 20:18:08.759672  487957 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21683-294150/.minikube/machines/embed-certs-565110/id_rsa...
	I1009 20:18:09.057433  487957 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21683-294150/.minikube/machines/embed-certs-565110/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1009 20:18:09.084846  487957 cli_runner.go:164] Run: docker container inspect embed-certs-565110 --format={{.State.Status}}
	I1009 20:18:09.112242  487957 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1009 20:18:09.112266  487957 kic_runner.go:114] Args: [docker exec --privileged embed-certs-565110 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1009 20:18:09.208922  487957 cli_runner.go:164] Run: docker container inspect embed-certs-565110 --format={{.State.Status}}
	I1009 20:18:09.231987  487957 machine.go:93] provisionDockerMachine start ...
	I1009 20:18:09.232111  487957 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-565110
	I1009 20:18:09.258909  487957 main.go:141] libmachine: Using SSH client type: native
	I1009 20:18:09.259265  487957 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33436 <nil> <nil>}
	I1009 20:18:09.259281  487957 main.go:141] libmachine: About to run SSH command:
	hostname
	I1009 20:18:09.259927  487957 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
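The handshake failure above is transient: sshd inside the freshly created kic container is still starting, and the same provisioning session succeeds a few seconds later (see the 20:18:12.408986 line further down). A minimal sketch of the dial-and-retry loop this implies, assuming a plain TCP reachability probe rather than minikube's actual libmachine SSH client; 127.0.0.1:33436 is the forwarded container port 22 shown in the log, the timings are illustrative:

    // dialretry.go — retry a TCP dial to the forwarded SSH port until it succeeds
    // or a deadline passes (illustrative sketch, not minikube's implementation).
    package main

    import (
    	"fmt"
    	"net"
    	"time"
    )

    func waitForTCP(addr string, timeout time.Duration) error {
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
    		if err == nil {
    			conn.Close()
    			return nil // port is accepting connections
    		}
    		time.Sleep(time.Second) // sshd not up yet; try again
    	}
    	return fmt.Errorf("%s not reachable within %s", addr, timeout)
    }

    func main() {
    	// 127.0.0.1:33436 is the host port Docker mapped to the container's port 22 above.
    	if err := waitForTCP("127.0.0.1:33436", 30*time.Second); err != nil {
    		fmt.Println(err)
    	}
    }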
	I1009 20:18:10.097293  485563 addons.go:514] duration metric: took 8.426811628s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1009 20:18:10.109605  485563 api_server.go:72] duration metric: took 8.439504688s to wait for apiserver process to appear ...
	I1009 20:18:10.109628  485563 api_server.go:88] waiting for apiserver healthz status ...
	I1009 20:18:10.109647  485563 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1009 20:18:10.144893  485563 api_server.go:279] https://192.168.85.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1009 20:18:10.144918  485563 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1009 20:18:10.610265  485563 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1009 20:18:10.618589  485563 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1009 20:18:10.619785  485563 api_server.go:141] control plane version: v1.34.1
	I1009 20:18:10.619814  485563 api_server.go:131] duration metric: took 510.179593ms to wait for apiserver health ...
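The healthz wait above simply polls https://192.168.85.2:8443/healthz until it stops returning 500 (the rbac/bootstrap-roles post-start hook is still pending) and answers 200 "ok". A minimal sketch of such a poll; it skips TLS verification for brevity, whereas the test harness authenticates with the cluster CA, and the retry interval is an assumption chosen to match the roughly 500ms gap visible in the log:

    // healthzpoll.go — poll an apiserver /healthz endpoint until it returns 200
    // (illustrative sketch of the wait shown in the log above).
    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"io"
    	"net/http"
    	"time"
    )

    func waitForHealthz(url string, timeout time.Duration) error {
    	client := &http.Client{
    		Timeout: 2 * time.Second,
    		// Skipping verification keeps the sketch short; the real client trusts minikubeCA.
    		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
    	}
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		resp, err := client.Get(url)
    		if err == nil {
    			body, _ := io.ReadAll(resp.Body)
    			resp.Body.Close()
    			if resp.StatusCode == http.StatusOK {
    				return nil // healthz answered "ok"
    			}
    			fmt.Printf("healthz returned %d:\n%s\n", resp.StatusCode, body)
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	return fmt.Errorf("apiserver at %s not healthy after %s", url, timeout)
    }

    func main() {
    	if err := waitForHealthz("https://192.168.85.2:8443/healthz", time.Minute); err != nil {
    		fmt.Println(err)
    	}
    }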
	I1009 20:18:10.619825  485563 system_pods.go:43] waiting for kube-system pods to appear ...
	I1009 20:18:10.624021  485563 system_pods.go:59] 8 kube-system pods found
	I1009 20:18:10.624065  485563 system_pods.go:61] "coredns-66bc5c9577-h7jz6" [50ef033a-7db2-4326-a6d6-574c692f50ba] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1009 20:18:10.624075  485563 system_pods.go:61] "etcd-no-preload-020313" [ffe41bc4-bdd7-4da8-9781-364de0d17db9] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1009 20:18:10.624080  485563 system_pods.go:61] "kindnet-47kwl" [60a32ed3-a01b-47ee-9128-d0763b3502ee] Running
	I1009 20:18:10.624087  485563 system_pods.go:61] "kube-apiserver-no-preload-020313" [d8f0991e-2fdd-4635-b144-99bfccfc61c0] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1009 20:18:10.624097  485563 system_pods.go:61] "kube-controller-manager-no-preload-020313" [a14b0780-83e0-4076-9076-c673c69ee034] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1009 20:18:10.624105  485563 system_pods.go:61] "kube-proxy-cd5v6" [7843ebcc-c450-40f9-b0dd-6cb09dd70a81] Running
	I1009 20:18:10.624112  485563 system_pods.go:61] "kube-scheduler-no-preload-020313" [a3f3beaf-2476-4cc8-845c-e0230d0fb499] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1009 20:18:10.624121  485563 system_pods.go:61] "storage-provisioner" [03ca5595-692b-4e09-a599-439b385749c1] Running
	I1009 20:18:10.624127  485563 system_pods.go:74] duration metric: took 4.295863ms to wait for pod list to return data ...
	I1009 20:18:10.624152  485563 default_sa.go:34] waiting for default service account to be created ...
	I1009 20:18:10.627061  485563 default_sa.go:45] found service account: "default"
	I1009 20:18:10.627089  485563 default_sa.go:55] duration metric: took 2.930232ms for default service account to be created ...
	I1009 20:18:10.627099  485563 system_pods.go:116] waiting for k8s-apps to be running ...
	I1009 20:18:10.630551  485563 system_pods.go:86] 8 kube-system pods found
	I1009 20:18:10.630635  485563 system_pods.go:89] "coredns-66bc5c9577-h7jz6" [50ef033a-7db2-4326-a6d6-574c692f50ba] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1009 20:18:10.630657  485563 system_pods.go:89] "etcd-no-preload-020313" [ffe41bc4-bdd7-4da8-9781-364de0d17db9] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1009 20:18:10.630664  485563 system_pods.go:89] "kindnet-47kwl" [60a32ed3-a01b-47ee-9128-d0763b3502ee] Running
	I1009 20:18:10.630671  485563 system_pods.go:89] "kube-apiserver-no-preload-020313" [d8f0991e-2fdd-4635-b144-99bfccfc61c0] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1009 20:18:10.630680  485563 system_pods.go:89] "kube-controller-manager-no-preload-020313" [a14b0780-83e0-4076-9076-c673c69ee034] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1009 20:18:10.630709  485563 system_pods.go:89] "kube-proxy-cd5v6" [7843ebcc-c450-40f9-b0dd-6cb09dd70a81] Running
	I1009 20:18:10.630731  485563 system_pods.go:89] "kube-scheduler-no-preload-020313" [a3f3beaf-2476-4cc8-845c-e0230d0fb499] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1009 20:18:10.630743  485563 system_pods.go:89] "storage-provisioner" [03ca5595-692b-4e09-a599-439b385749c1] Running
	I1009 20:18:10.630750  485563 system_pods.go:126] duration metric: took 3.645302ms to wait for k8s-apps to be running ...
	I1009 20:18:10.630762  485563 system_svc.go:44] waiting for kubelet service to be running ....
	I1009 20:18:10.630834  485563 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1009 20:18:10.646461  485563 system_svc.go:56] duration metric: took 15.690099ms WaitForService to wait for kubelet
	I1009 20:18:10.646490  485563 kubeadm.go:586] duration metric: took 8.976395522s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1009 20:18:10.646510  485563 node_conditions.go:102] verifying NodePressure condition ...
	I1009 20:18:10.649916  485563 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1009 20:18:10.649949  485563 node_conditions.go:123] node cpu capacity is 2
	I1009 20:18:10.649964  485563 node_conditions.go:105] duration metric: took 3.448869ms to run NodePressure ...
	I1009 20:18:10.649977  485563 start.go:242] waiting for startup goroutines ...
	I1009 20:18:10.649985  485563 start.go:247] waiting for cluster config update ...
	I1009 20:18:10.650001  485563 start.go:256] writing updated cluster config ...
	I1009 20:18:10.650295  485563 ssh_runner.go:195] Run: rm -f paused
	I1009 20:18:10.654360  485563 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1009 20:18:10.658044  485563 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-h7jz6" in "kube-system" namespace to be "Ready" or be gone ...
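The extra pod_ready wait above inspects the Ready condition of each labelled kube-system pod. A hedged sketch of that check using client-go, with the kubeconfig path and the coredns pod name taken from the log; this is illustrative, not minikube's pod_ready implementation:

    // podready.go — check a pod's "Ready" condition with client-go (illustrative).
    package main

    import (
    	"context"
    	"fmt"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func isPodReady(pod *corev1.Pod) bool {
    	for _, cond := range pod.Status.Conditions {
    		if cond.Type == corev1.PodReady {
    			return cond.Status == corev1.ConditionTrue
    		}
    	}
    	return false
    }

    func main() {
    	config, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
    	if err != nil {
    		panic(err)
    	}
    	cs, err := kubernetes.NewForConfig(config)
    	if err != nil {
    		panic(err)
    	}
    	pod, err := cs.CoreV1().Pods("kube-system").Get(context.Background(), "coredns-66bc5c9577-h7jz6", metav1.GetOptions{})
    	if err != nil {
    		panic(err)
    	}
    	fmt.Println("ready:", isPodReady(pod))
    }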
	I1009 20:18:12.408986  487957 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-565110
	
	I1009 20:18:12.409068  487957 ubuntu.go:182] provisioning hostname "embed-certs-565110"
	I1009 20:18:12.409162  487957 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-565110
	I1009 20:18:12.443590  487957 main.go:141] libmachine: Using SSH client type: native
	I1009 20:18:12.443907  487957 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33436 <nil> <nil>}
	I1009 20:18:12.443926  487957 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-565110 && echo "embed-certs-565110" | sudo tee /etc/hostname
	I1009 20:18:12.614528  487957 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-565110
	
	I1009 20:18:12.614655  487957 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-565110
	I1009 20:18:12.631542  487957 main.go:141] libmachine: Using SSH client type: native
	I1009 20:18:12.631869  487957 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33436 <nil> <nil>}
	I1009 20:18:12.631892  487957 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-565110' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-565110/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-565110' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1009 20:18:12.777423  487957 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1009 20:18:12.777514  487957 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21683-294150/.minikube CaCertPath:/home/jenkins/minikube-integration/21683-294150/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21683-294150/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21683-294150/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21683-294150/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21683-294150/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21683-294150/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21683-294150/.minikube}
	I1009 20:18:12.777567  487957 ubuntu.go:190] setting up certificates
	I1009 20:18:12.777603  487957 provision.go:84] configureAuth start
	I1009 20:18:12.777708  487957 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-565110
	I1009 20:18:12.795597  487957 provision.go:143] copyHostCerts
	I1009 20:18:12.795662  487957 exec_runner.go:144] found /home/jenkins/minikube-integration/21683-294150/.minikube/key.pem, removing ...
	I1009 20:18:12.795672  487957 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21683-294150/.minikube/key.pem
	I1009 20:18:12.795748  487957 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-294150/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21683-294150/.minikube/key.pem (1679 bytes)
	I1009 20:18:12.795855  487957 exec_runner.go:144] found /home/jenkins/minikube-integration/21683-294150/.minikube/ca.pem, removing ...
	I1009 20:18:12.795860  487957 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21683-294150/.minikube/ca.pem
	I1009 20:18:12.795888  487957 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-294150/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21683-294150/.minikube/ca.pem (1078 bytes)
	I1009 20:18:12.795945  487957 exec_runner.go:144] found /home/jenkins/minikube-integration/21683-294150/.minikube/cert.pem, removing ...
	I1009 20:18:12.795950  487957 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21683-294150/.minikube/cert.pem
	I1009 20:18:12.795974  487957 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-294150/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21683-294150/.minikube/cert.pem (1123 bytes)
	I1009 20:18:12.796026  487957 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21683-294150/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21683-294150/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21683-294150/.minikube/certs/ca-key.pem org=jenkins.embed-certs-565110 san=[127.0.0.1 192.168.76.2 embed-certs-565110 localhost minikube]
	I1009 20:18:13.060338  487957 provision.go:177] copyRemoteCerts
	I1009 20:18:13.060419  487957 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1009 20:18:13.060466  487957 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-565110
	I1009 20:18:13.077461  487957 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33436 SSHKeyPath:/home/jenkins/minikube-integration/21683-294150/.minikube/machines/embed-certs-565110/id_rsa Username:docker}
	I1009 20:18:13.180557  487957 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-294150/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1009 20:18:13.198137  487957 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-294150/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1009 20:18:13.215683  487957 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-294150/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1009 20:18:13.232691  487957 provision.go:87] duration metric: took 455.050935ms to configureAuth
	I1009 20:18:13.232720  487957 ubuntu.go:206] setting minikube options for container-runtime
	I1009 20:18:13.232915  487957 config.go:182] Loaded profile config "embed-certs-565110": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1009 20:18:13.233030  487957 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-565110
	I1009 20:18:13.249905  487957 main.go:141] libmachine: Using SSH client type: native
	I1009 20:18:13.250220  487957 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33436 <nil> <nil>}
	I1009 20:18:13.250240  487957 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1009 20:18:13.596481  487957 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1009 20:18:13.596506  487957 machine.go:96] duration metric: took 4.364493205s to provisionDockerMachine
	I1009 20:18:13.596516  487957 client.go:171] duration metric: took 12.542973622s to LocalClient.Create
	I1009 20:18:13.596531  487957 start.go:168] duration metric: took 12.543089792s to libmachine.API.Create "embed-certs-565110"
	I1009 20:18:13.596538  487957 start.go:294] postStartSetup for "embed-certs-565110" (driver="docker")
	I1009 20:18:13.596549  487957 start.go:323] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1009 20:18:13.596615  487957 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1009 20:18:13.596676  487957 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-565110
	I1009 20:18:13.617787  487957 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33436 SSHKeyPath:/home/jenkins/minikube-integration/21683-294150/.minikube/machines/embed-certs-565110/id_rsa Username:docker}
	I1009 20:18:13.725765  487957 ssh_runner.go:195] Run: cat /etc/os-release
	I1009 20:18:13.729132  487957 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1009 20:18:13.729161  487957 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1009 20:18:13.729172  487957 filesync.go:126] Scanning /home/jenkins/minikube-integration/21683-294150/.minikube/addons for local assets ...
	I1009 20:18:13.729226  487957 filesync.go:126] Scanning /home/jenkins/minikube-integration/21683-294150/.minikube/files for local assets ...
	I1009 20:18:13.729312  487957 filesync.go:149] local asset: /home/jenkins/minikube-integration/21683-294150/.minikube/files/etc/ssl/certs/2960022.pem -> 2960022.pem in /etc/ssl/certs
	I1009 20:18:13.729413  487957 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1009 20:18:13.737017  487957 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-294150/.minikube/files/etc/ssl/certs/2960022.pem --> /etc/ssl/certs/2960022.pem (1708 bytes)
	I1009 20:18:13.756735  487957 start.go:297] duration metric: took 160.181177ms for postStartSetup
	I1009 20:18:13.757184  487957 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-565110
	I1009 20:18:13.773952  487957 profile.go:143] Saving config to /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/embed-certs-565110/config.json ...
	I1009 20:18:13.774246  487957 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1009 20:18:13.774302  487957 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-565110
	I1009 20:18:13.790882  487957 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33436 SSHKeyPath:/home/jenkins/minikube-integration/21683-294150/.minikube/machines/embed-certs-565110/id_rsa Username:docker}
	I1009 20:18:13.890407  487957 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1009 20:18:13.895410  487957 start.go:129] duration metric: took 12.846015538s to createHost
	I1009 20:18:13.895435  487957 start.go:84] releasing machines lock for "embed-certs-565110", held for 12.84615204s
	I1009 20:18:13.895507  487957 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-565110
	I1009 20:18:13.912673  487957 ssh_runner.go:195] Run: cat /version.json
	I1009 20:18:13.912711  487957 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1009 20:18:13.912724  487957 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-565110
	I1009 20:18:13.912764  487957 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-565110
	I1009 20:18:13.934714  487957 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33436 SSHKeyPath:/home/jenkins/minikube-integration/21683-294150/.minikube/machines/embed-certs-565110/id_rsa Username:docker}
	I1009 20:18:13.935171  487957 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33436 SSHKeyPath:/home/jenkins/minikube-integration/21683-294150/.minikube/machines/embed-certs-565110/id_rsa Username:docker}
	I1009 20:18:14.037180  487957 ssh_runner.go:195] Run: systemctl --version
	I1009 20:18:14.130910  487957 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1009 20:18:14.174802  487957 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1009 20:18:14.179297  487957 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1009 20:18:14.179375  487957 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1009 20:18:14.222407  487957 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1009 20:18:14.222440  487957 start.go:496] detecting cgroup driver to use...
	I1009 20:18:14.222471  487957 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1009 20:18:14.222523  487957 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1009 20:18:14.244633  487957 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1009 20:18:14.262447  487957 docker.go:218] disabling cri-docker service (if available) ...
	I1009 20:18:14.262509  487957 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1009 20:18:14.284384  487957 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1009 20:18:14.306165  487957 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1009 20:18:14.434469  487957 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1009 20:18:14.610892  487957 docker.go:234] disabling docker service ...
	I1009 20:18:14.611108  487957 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1009 20:18:14.639331  487957 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1009 20:18:14.673968  487957 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1009 20:18:14.846807  487957 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1009 20:18:15.038156  487957 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1009 20:18:15.065809  487957 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1009 20:18:15.087948  487957 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1009 20:18:15.088021  487957 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 20:18:15.103023  487957 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1009 20:18:15.103096  487957 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 20:18:15.115108  487957 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 20:18:15.128438  487957 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 20:18:15.140537  487957 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1009 20:18:15.154873  487957 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 20:18:15.174178  487957 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 20:18:15.188211  487957 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 20:18:15.198267  487957 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1009 20:18:15.206352  487957 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1009 20:18:15.214468  487957 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 20:18:15.382207  487957 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1009 20:18:15.585151  487957 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1009 20:18:15.585283  487957 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1009 20:18:15.591761  487957 start.go:564] Will wait 60s for crictl version
	I1009 20:18:15.591887  487957 ssh_runner.go:195] Run: which crictl
	I1009 20:18:15.600356  487957 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1009 20:18:15.635269  487957 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
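After restarting CRI-O, the log waits up to 60s for /var/run/crio/crio.sock and then for a crictl version response. A small sketch of the first of those waits, polling os.Stat until a socket exists at the path from the log; the retry interval is an assumption:

    // socketwait.go — wait for a unix socket to appear (illustrative version of the
    // "Will wait 60s for socket path /var/run/crio/crio.sock" step above).
    package main

    import (
    	"fmt"
    	"os"
    	"time"
    )

    func waitForSocket(path string, timeout time.Duration) error {
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		if info, err := os.Stat(path); err == nil && info.Mode()&os.ModeSocket != 0 {
    			return nil // socket exists
    		}
    		time.Sleep(250 * time.Millisecond)
    	}
    	return fmt.Errorf("socket %s did not appear within %s", path, timeout)
    }

    func main() {
    	if err := waitForSocket("/var/run/crio/crio.sock", 60*time.Second); err != nil {
    		fmt.Println(err)
    	}
    }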
	I1009 20:18:15.635366  487957 ssh_runner.go:195] Run: crio --version
	I1009 20:18:15.687811  487957 ssh_runner.go:195] Run: crio --version
	I1009 20:18:15.729423  487957 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	W1009 20:18:12.663946  485563 pod_ready.go:104] pod "coredns-66bc5c9577-h7jz6" is not "Ready", error: <nil>
	W1009 20:18:14.664367  485563 pod_ready.go:104] pod "coredns-66bc5c9577-h7jz6" is not "Ready", error: <nil>
	W1009 20:18:16.704606  485563 pod_ready.go:104] pod "coredns-66bc5c9577-h7jz6" is not "Ready", error: <nil>
	I1009 20:18:15.732452  487957 cli_runner.go:164] Run: docker network inspect embed-certs-565110 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1009 20:18:15.749476  487957 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1009 20:18:15.754003  487957 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
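The one-liner above makes sure /etc/hosts maps host.minikube.internal to the network gateway: it drops any previous line ending in that name and appends the fresh mapping. The same idea in Go, using the path, IP and hostname from the log; the file handling itself is an illustrative sketch (and, like the original, needs root to write /etc/hosts):

    // hostsentry.go — drop any existing mapping for a host name and append a new one,
    // mirroring the shell one-liner above (illustrative sketch).
    package main

    import (
    	"os"
    	"strings"
    )

    func ensureHostsEntry(path, ip, name string) error {
    	data, err := os.ReadFile(path)
    	if err != nil {
    		return err
    	}
    	var kept []string
    	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
    		if !strings.HasSuffix(line, "\t"+name) { // same filter as grep -v $'\t<name>$'
    			kept = append(kept, line)
    		}
    	}
    	kept = append(kept, ip+"\t"+name)
    	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
    }

    func main() {
    	if err := ensureHostsEntry("/etc/hosts", "192.168.76.1", "host.minikube.internal"); err != nil {
    		panic(err)
    	}
    }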
	I1009 20:18:15.767169  487957 kubeadm.go:883] updating cluster {Name:embed-certs-565110 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-565110 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1009 20:18:15.767299  487957 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1009 20:18:15.767358  487957 ssh_runner.go:195] Run: sudo crictl images --output json
	I1009 20:18:15.822609  487957 crio.go:514] all images are preloaded for cri-o runtime.
	I1009 20:18:15.822629  487957 crio.go:433] Images already preloaded, skipping extraction
	I1009 20:18:15.822684  487957 ssh_runner.go:195] Run: sudo crictl images --output json
	I1009 20:18:15.859594  487957 crio.go:514] all images are preloaded for cri-o runtime.
	I1009 20:18:15.859620  487957 cache_images.go:85] Images are preloaded, skipping loading
	I1009 20:18:15.859628  487957 kubeadm.go:934] updating node { 192.168.76.2 8443 v1.34.1 crio true true} ...
	I1009 20:18:15.859721  487957 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=embed-certs-565110 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:embed-certs-565110 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1009 20:18:15.859808  487957 ssh_runner.go:195] Run: crio config
	I1009 20:18:15.969678  487957 cni.go:84] Creating CNI manager for ""
	I1009 20:18:15.969751  487957 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1009 20:18:15.969781  487957 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1009 20:18:15.969838  487957 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-565110 NodeName:embed-certs-565110 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1009 20:18:15.970029  487957 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-565110"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
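The generated kubeadm.yaml above is a multi-document YAML stream (InitConfiguration, ClusterConfiguration, KubeletConfiguration and KubeProxyConfiguration). A hedged sketch of reading such a file and printing each document's kind with gopkg.in/yaml.v3; the path is the one the config is copied to a few lines further down, everything else is illustrative:

    // kubeadmsplit.go — decode a multi-document kubeadm.yaml and print each kind
    // (illustrative; minikube renders this config from templates rather than parsing it).
    package main

    import (
    	"fmt"
    	"io"
    	"os"

    	"gopkg.in/yaml.v3"
    )

    func main() {
    	f, err := os.Open("/var/tmp/minikube/kubeadm.yaml")
    	if err != nil {
    		panic(err)
    	}
    	defer f.Close()

    	dec := yaml.NewDecoder(f)
    	for {
    		var doc map[string]interface{}
    		if err := dec.Decode(&doc); err == io.EOF {
    			break // end of the YAML stream
    		} else if err != nil {
    			panic(err)
    		}
    		fmt.Printf("kind=%v apiVersion=%v\n", doc["kind"], doc["apiVersion"])
    	}
    }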
	I1009 20:18:15.970149  487957 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1009 20:18:15.981678  487957 binaries.go:44] Found k8s binaries, skipping transfer
	I1009 20:18:15.981829  487957 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1009 20:18:15.990181  487957 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (368 bytes)
	I1009 20:18:16.005845  487957 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1009 20:18:16.022617  487957 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2215 bytes)
	I1009 20:18:16.037942  487957 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1009 20:18:16.042338  487957 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1009 20:18:16.053143  487957 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 20:18:16.220442  487957 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1009 20:18:16.247946  487957 certs.go:69] Setting up /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/embed-certs-565110 for IP: 192.168.76.2
	I1009 20:18:16.248015  487957 certs.go:195] generating shared ca certs ...
	I1009 20:18:16.248046  487957 certs.go:227] acquiring lock for ca certs: {Name:mk21fccf7de12baa51e226eec44546b42b579a70 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 20:18:16.248264  487957 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21683-294150/.minikube/ca.key
	I1009 20:18:16.248353  487957 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21683-294150/.minikube/proxy-client-ca.key
	I1009 20:18:16.248383  487957 certs.go:257] generating profile certs ...
	I1009 20:18:16.248489  487957 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/embed-certs-565110/client.key
	I1009 20:18:16.248533  487957 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/embed-certs-565110/client.crt with IP's: []
	I1009 20:18:17.725333  487957 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/embed-certs-565110/client.crt ...
	I1009 20:18:17.725418  487957 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/embed-certs-565110/client.crt: {Name:mk27e25d4844f4e5256972d00578c76ca030ddc6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 20:18:17.725622  487957 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/embed-certs-565110/client.key ...
	I1009 20:18:17.725657  487957 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/embed-certs-565110/client.key: {Name:mk5baef3c8c7f4cffd2455d6251cc2bf43177213 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 20:18:17.725803  487957 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/embed-certs-565110/apiserver.key.e7b9ab9d
	I1009 20:18:17.725842  487957 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/embed-certs-565110/apiserver.crt.e7b9ab9d with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1009 20:18:18.302153  487957 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/embed-certs-565110/apiserver.crt.e7b9ab9d ...
	I1009 20:18:18.302230  487957 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/embed-certs-565110/apiserver.crt.e7b9ab9d: {Name:mkd9ba51107f17a2c6354d627d3a1138b49b247d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 20:18:18.302459  487957 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/embed-certs-565110/apiserver.key.e7b9ab9d ...
	I1009 20:18:18.302497  487957 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/embed-certs-565110/apiserver.key.e7b9ab9d: {Name:mk616b2f887fc7903bd26550f92199671bbd9e18 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 20:18:18.302638  487957 certs.go:382] copying /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/embed-certs-565110/apiserver.crt.e7b9ab9d -> /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/embed-certs-565110/apiserver.crt
	I1009 20:18:18.302762  487957 certs.go:386] copying /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/embed-certs-565110/apiserver.key.e7b9ab9d -> /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/embed-certs-565110/apiserver.key
	I1009 20:18:18.302852  487957 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/embed-certs-565110/proxy-client.key
	I1009 20:18:18.302901  487957 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/embed-certs-565110/proxy-client.crt with IP's: []
	I1009 20:18:18.432079  487957 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/embed-certs-565110/proxy-client.crt ...
	I1009 20:18:18.432167  487957 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/embed-certs-565110/proxy-client.crt: {Name:mkc1d0ca280e0c2bbae28c8147a5d7e32b0c826c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 20:18:18.432394  487957 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/embed-certs-565110/proxy-client.key ...
	I1009 20:18:18.432432  487957 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/embed-certs-565110/proxy-client.key: {Name:mk1d8c2aaad68b1b10dec12dbce954706b896254 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
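The profile certificates generated above are signed by the shared minikubeCA and carry the SANs listed in the log (10.96.0.1, 127.0.0.1, 10.0.0.1, 192.168.76.2 plus the host names). A minimal sketch of producing a certificate with such IP and DNS SANs via crypto/x509; it self-signs for brevity instead of signing with the CA key, so it is illustrative only:

    // certsketch.go — generate a server certificate with IP and DNS SANs
    // (values from the log above; signing logic is a simplified, self-signed sketch).
    package main

    import (
    	"crypto/rand"
    	"crypto/rsa"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"encoding/pem"
    	"math/big"
    	"net"
    	"os"
    	"time"
    )

    func main() {
    	key, _ := rsa.GenerateKey(rand.Reader, 2048) // errors elided for brevity
    	tmpl := &x509.Certificate{
    		SerialNumber: big.NewInt(1),
    		Subject:      pkix.Name{Organization: []string{"jenkins.embed-certs-565110"}},
    		NotBefore:    time.Now(),
    		NotAfter:     time.Now().Add(26280 * time.Hour), // matches CertExpiration in the cluster config
    		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
    		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    		IPAddresses:  []net.IP{net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"), net.ParseIP("10.0.0.1"), net.ParseIP("192.168.76.2")},
    		DNSNames:     []string{"embed-certs-565110", "localhost", "minikube"},
    	}
    	// Self-signed here; the real flow signs with the minikubeCA private key.
    	der, _ := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
    	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }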
	I1009 20:18:18.432682  487957 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-294150/.minikube/certs/296002.pem (1338 bytes)
	W1009 20:18:18.432751  487957 certs.go:480] ignoring /home/jenkins/minikube-integration/21683-294150/.minikube/certs/296002_empty.pem, impossibly tiny 0 bytes
	I1009 20:18:18.432779  487957 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-294150/.minikube/certs/ca-key.pem (1679 bytes)
	I1009 20:18:18.432845  487957 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-294150/.minikube/certs/ca.pem (1078 bytes)
	I1009 20:18:18.432916  487957 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-294150/.minikube/certs/cert.pem (1123 bytes)
	I1009 20:18:18.432963  487957 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-294150/.minikube/certs/key.pem (1679 bytes)
	I1009 20:18:18.433051  487957 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-294150/.minikube/files/etc/ssl/certs/2960022.pem (1708 bytes)
	I1009 20:18:18.433726  487957 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-294150/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1009 20:18:18.453982  487957 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-294150/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1009 20:18:18.477046  487957 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-294150/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1009 20:18:18.498678  487957 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-294150/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1009 20:18:18.522773  487957 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/embed-certs-565110/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1009 20:18:18.551727  487957 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/embed-certs-565110/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1009 20:18:18.578927  487957 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/embed-certs-565110/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1009 20:18:18.597242  487957 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/embed-certs-565110/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1009 20:18:18.615529  487957 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-294150/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1009 20:18:18.633720  487957 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-294150/.minikube/certs/296002.pem --> /usr/share/ca-certificates/296002.pem (1338 bytes)
	I1009 20:18:18.651679  487957 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-294150/.minikube/files/etc/ssl/certs/2960022.pem --> /usr/share/ca-certificates/2960022.pem (1708 bytes)
	I1009 20:18:18.676459  487957 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1009 20:18:18.698724  487957 ssh_runner.go:195] Run: openssl version
	I1009 20:18:18.709691  487957 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2960022.pem && ln -fs /usr/share/ca-certificates/2960022.pem /etc/ssl/certs/2960022.pem"
	I1009 20:18:18.730884  487957 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2960022.pem
	I1009 20:18:18.738917  487957 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  9 19:08 /usr/share/ca-certificates/2960022.pem
	I1009 20:18:18.739000  487957 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2960022.pem
	I1009 20:18:18.815581  487957 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2960022.pem /etc/ssl/certs/3ec20f2e.0"
	I1009 20:18:18.825395  487957 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1009 20:18:18.834887  487957 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1009 20:18:18.839799  487957 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  9 19:01 /usr/share/ca-certificates/minikubeCA.pem
	I1009 20:18:18.839923  487957 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1009 20:18:18.883334  487957 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1009 20:18:18.892460  487957 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/296002.pem && ln -fs /usr/share/ca-certificates/296002.pem /etc/ssl/certs/296002.pem"
	I1009 20:18:18.901344  487957 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/296002.pem
	I1009 20:18:18.906383  487957 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  9 19:08 /usr/share/ca-certificates/296002.pem
	I1009 20:18:18.906511  487957 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/296002.pem
	I1009 20:18:18.954801  487957 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/296002.pem /etc/ssl/certs/51391683.0"
	I1009 20:18:18.965100  487957 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1009 20:18:18.969852  487957 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1009 20:18:18.969967  487957 kubeadm.go:400] StartCluster: {Name:embed-certs-565110 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-565110 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1009 20:18:18.970127  487957 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1009 20:18:18.970227  487957 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1009 20:18:19.005021  487957 cri.go:89] found id: ""
	I1009 20:18:19.005211  487957 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1009 20:18:19.017617  487957 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1009 20:18:19.026204  487957 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1009 20:18:19.026321  487957 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1009 20:18:19.037313  487957 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1009 20:18:19.037383  487957 kubeadm.go:157] found existing configuration files:
	
	I1009 20:18:19.037471  487957 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1009 20:18:19.046246  487957 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1009 20:18:19.046372  487957 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1009 20:18:19.054169  487957 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1009 20:18:19.063915  487957 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1009 20:18:19.064026  487957 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1009 20:18:19.071767  487957 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1009 20:18:19.080891  487957 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1009 20:18:19.081005  487957 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1009 20:18:19.088826  487957 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1009 20:18:19.097863  487957 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1009 20:18:19.097977  487957 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1009 20:18:19.106378  487957 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1009 20:18:19.161756  487957 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1009 20:18:19.162271  487957 kubeadm.go:318] [preflight] Running pre-flight checks
	I1009 20:18:19.190365  487957 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1009 20:18:19.190524  487957 kubeadm.go:318] KERNEL_VERSION: 5.15.0-1084-aws
	I1009 20:18:19.190597  487957 kubeadm.go:318] OS: Linux
	I1009 20:18:19.190683  487957 kubeadm.go:318] CGROUPS_CPU: enabled
	I1009 20:18:19.190768  487957 kubeadm.go:318] CGROUPS_CPUACCT: enabled
	I1009 20:18:19.190850  487957 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1009 20:18:19.190937  487957 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1009 20:18:19.191019  487957 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1009 20:18:19.191106  487957 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1009 20:18:19.191185  487957 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1009 20:18:19.191273  487957 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1009 20:18:19.191352  487957 kubeadm.go:318] CGROUPS_BLKIO: enabled
	I1009 20:18:19.301684  487957 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1009 20:18:19.301953  487957 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1009 20:18:19.302161  487957 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1009 20:18:19.321543  487957 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1009 20:18:19.330626  487957 out.go:252]   - Generating certificates and keys ...
	I1009 20:18:19.330794  487957 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1009 20:18:19.330917  487957 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1009 20:18:19.682795  487957 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1009 20:18:19.824373  487957 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1009 20:18:20.062756  487957 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1009 20:18:20.176731  487957 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1009 20:18:20.524693  487957 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1009 20:18:20.526513  487957 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [embed-certs-565110 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	W1009 20:18:19.163254  485563 pod_ready.go:104] pod "coredns-66bc5c9577-h7jz6" is not "Ready", error: <nil>
	W1009 20:18:21.165465  485563 pod_ready.go:104] pod "coredns-66bc5c9577-h7jz6" is not "Ready", error: <nil>
	I1009 20:18:20.906239  487957 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1009 20:18:20.906932  487957 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [embed-certs-565110 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1009 20:18:21.173505  487957 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1009 20:18:21.833685  487957 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1009 20:18:22.864146  487957 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1009 20:18:22.864381  487957 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1009 20:18:22.996838  487957 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1009 20:18:23.380688  487957 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1009 20:18:24.382333  487957 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1009 20:18:24.718717  487957 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1009 20:18:24.976592  487957 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1009 20:18:24.977624  487957 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1009 20:18:24.980606  487957 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1009 20:18:24.985473  487957 out.go:252]   - Booting up control plane ...
	I1009 20:18:24.985583  487957 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1009 20:18:24.985664  487957 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1009 20:18:24.991659  487957 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1009 20:18:25.015936  487957 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1009 20:18:25.016067  487957 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1009 20:18:25.029627  487957 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1009 20:18:25.029734  487957 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1009 20:18:25.029776  487957 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1009 20:18:25.206386  487957 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1009 20:18:25.206513  487957 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	W1009 20:18:23.167466  485563 pod_ready.go:104] pod "coredns-66bc5c9577-h7jz6" is not "Ready", error: <nil>
	W1009 20:18:25.667437  485563 pod_ready.go:104] pod "coredns-66bc5c9577-h7jz6" is not "Ready", error: <nil>
	I1009 20:18:26.208203  487957 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.001861031s
	I1009 20:18:26.212100  487957 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1009 20:18:26.212201  487957 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.76.2:8443/livez
	I1009 20:18:26.212295  487957 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1009 20:18:26.212377  487957 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	W1009 20:18:27.667559  485563 pod_ready.go:104] pod "coredns-66bc5c9577-h7jz6" is not "Ready", error: <nil>
	W1009 20:18:30.163546  485563 pod_ready.go:104] pod "coredns-66bc5c9577-h7jz6" is not "Ready", error: <nil>
	I1009 20:18:31.271882  487957 kubeadm.go:318] [control-plane-check] kube-controller-manager is healthy after 5.058663339s
	I1009 20:18:32.929952  487957 kubeadm.go:318] [control-plane-check] kube-scheduler is healthy after 6.717849361s
	I1009 20:18:34.713557  487957 kubeadm.go:318] [control-plane-check] kube-apiserver is healthy after 8.501415519s
	I1009 20:18:34.733451  487957 kubeadm.go:318] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1009 20:18:34.750579  487957 kubeadm.go:318] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1009 20:18:34.763899  487957 kubeadm.go:318] [upload-certs] Skipping phase. Please see --upload-certs
	I1009 20:18:34.764118  487957 kubeadm.go:318] [mark-control-plane] Marking the node embed-certs-565110 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1009 20:18:34.775449  487957 kubeadm.go:318] [bootstrap-token] Using token: 2scf2u.5od2xm2wg3arr93y
	I1009 20:18:34.776787  487957 out.go:252]   - Configuring RBAC rules ...
	I1009 20:18:34.776935  487957 kubeadm.go:318] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1009 20:18:34.782520  487957 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1009 20:18:34.790071  487957 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1009 20:18:34.795055  487957 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1009 20:18:34.802262  487957 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1009 20:18:34.808941  487957 kubeadm.go:318] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1009 20:18:35.121621  487957 kubeadm.go:318] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1009 20:18:35.557009  487957 kubeadm.go:318] [addons] Applied essential addon: CoreDNS
	I1009 20:18:36.121457  487957 kubeadm.go:318] [addons] Applied essential addon: kube-proxy
	I1009 20:18:36.122695  487957 kubeadm.go:318] 
	I1009 20:18:36.122777  487957 kubeadm.go:318] Your Kubernetes control-plane has initialized successfully!
	I1009 20:18:36.122788  487957 kubeadm.go:318] 
	I1009 20:18:36.122870  487957 kubeadm.go:318] To start using your cluster, you need to run the following as a regular user:
	I1009 20:18:36.122879  487957 kubeadm.go:318] 
	I1009 20:18:36.122906  487957 kubeadm.go:318]   mkdir -p $HOME/.kube
	I1009 20:18:36.122972  487957 kubeadm.go:318]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1009 20:18:36.123030  487957 kubeadm.go:318]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1009 20:18:36.123040  487957 kubeadm.go:318] 
	I1009 20:18:36.123097  487957 kubeadm.go:318] Alternatively, if you are the root user, you can run:
	I1009 20:18:36.123105  487957 kubeadm.go:318] 
	I1009 20:18:36.123191  487957 kubeadm.go:318]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1009 20:18:36.123209  487957 kubeadm.go:318] 
	I1009 20:18:36.123285  487957 kubeadm.go:318] You should now deploy a pod network to the cluster.
	I1009 20:18:36.123376  487957 kubeadm.go:318] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1009 20:18:36.123451  487957 kubeadm.go:318]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1009 20:18:36.123457  487957 kubeadm.go:318] 
	I1009 20:18:36.123552  487957 kubeadm.go:318] You can now join any number of control-plane nodes by copying certificate authorities
	I1009 20:18:36.123633  487957 kubeadm.go:318] and service account keys on each node and then running the following as root:
	I1009 20:18:36.123638  487957 kubeadm.go:318] 
	I1009 20:18:36.123730  487957 kubeadm.go:318]   kubeadm join control-plane.minikube.internal:8443 --token 2scf2u.5od2xm2wg3arr93y \
	I1009 20:18:36.123839  487957 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:e766d16640f098061f552dd476e80ebd3809bd57b4957045222f32c55d34903e \
	I1009 20:18:36.123861  487957 kubeadm.go:318] 	--control-plane 
	I1009 20:18:36.123865  487957 kubeadm.go:318] 
	I1009 20:18:36.123954  487957 kubeadm.go:318] Then you can join any number of worker nodes by running the following on each as root:
	I1009 20:18:36.123959  487957 kubeadm.go:318] 
	I1009 20:18:36.124045  487957 kubeadm.go:318] kubeadm join control-plane.minikube.internal:8443 --token 2scf2u.5od2xm2wg3arr93y \
	I1009 20:18:36.124152  487957 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:e766d16640f098061f552dd476e80ebd3809bd57b4957045222f32c55d34903e 
	I1009 20:18:36.128166  487957 kubeadm.go:318] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1009 20:18:36.128433  487957 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1009 20:18:36.128556  487957 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1009 20:18:36.128619  487957 cni.go:84] Creating CNI manager for ""
	I1009 20:18:36.128635  487957 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1009 20:18:36.130939  487957 out.go:179] * Configuring CNI (Container Networking Interface) ...
	W1009 20:18:32.165289  485563 pod_ready.go:104] pod "coredns-66bc5c9577-h7jz6" is not "Ready", error: <nil>
	W1009 20:18:34.665519  485563 pod_ready.go:104] pod "coredns-66bc5c9577-h7jz6" is not "Ready", error: <nil>
	I1009 20:18:36.132180  487957 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1009 20:18:36.138224  487957 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1009 20:18:36.138249  487957 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1009 20:18:36.160571  487957 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1009 20:18:36.604852  487957 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1009 20:18:36.605000  487957 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1009 20:18:36.605072  487957 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-565110 minikube.k8s.io/updated_at=2025_10_09T20_18_36_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=67931d2cc0a2e29153be17ee4f2d502b8a45c9cb minikube.k8s.io/name=embed-certs-565110 minikube.k8s.io/primary=true
	I1009 20:18:36.829003  487957 ops.go:34] apiserver oom_adj: -16
	I1009 20:18:36.829202  487957 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1009 20:18:37.330088  487957 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1009 20:18:37.829429  487957 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1009 20:18:38.330001  487957 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1009 20:18:38.829263  487957 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1009 20:18:39.329670  487957 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1009 20:18:39.830065  487957 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1009 20:18:40.330273  487957 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1009 20:18:40.830174  487957 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1009 20:18:40.965309  487957 kubeadm.go:1113] duration metric: took 4.360356736s to wait for elevateKubeSystemPrivileges
	I1009 20:18:40.965343  487957 kubeadm.go:402] duration metric: took 21.99538278s to StartCluster
	I1009 20:18:40.965362  487957 settings.go:142] acquiring lock: {Name:mk20228ebaa2294ae35726600a0d8058088b24a4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 20:18:40.965441  487957 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21683-294150/kubeconfig
	I1009 20:18:40.966758  487957 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-294150/kubeconfig: {Name:mke4661344aa77e10bf6690825e8d3a29b29b411 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 20:18:40.966988  487957 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1009 20:18:40.967078  487957 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1009 20:18:40.967495  487957 config.go:182] Loaded profile config "embed-certs-565110": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1009 20:18:40.967553  487957 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1009 20:18:40.967622  487957 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-565110"
	I1009 20:18:40.967637  487957 addons.go:238] Setting addon storage-provisioner=true in "embed-certs-565110"
	I1009 20:18:40.967665  487957 host.go:66] Checking if "embed-certs-565110" exists ...
	I1009 20:18:40.968251  487957 cli_runner.go:164] Run: docker container inspect embed-certs-565110 --format={{.State.Status}}
	I1009 20:18:40.968715  487957 addons.go:69] Setting default-storageclass=true in profile "embed-certs-565110"
	I1009 20:18:40.968736  487957 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-565110"
	I1009 20:18:40.969034  487957 cli_runner.go:164] Run: docker container inspect embed-certs-565110 --format={{.State.Status}}
	I1009 20:18:40.972447  487957 out.go:179] * Verifying Kubernetes components...
	I1009 20:18:40.974031  487957 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 20:18:41.031874  487957 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	W1009 20:18:37.163362  485563 pod_ready.go:104] pod "coredns-66bc5c9577-h7jz6" is not "Ready", error: <nil>
	W1009 20:18:39.164606  485563 pod_ready.go:104] pod "coredns-66bc5c9577-h7jz6" is not "Ready", error: <nil>
	W1009 20:18:41.663578  485563 pod_ready.go:104] pod "coredns-66bc5c9577-h7jz6" is not "Ready", error: <nil>
	I1009 20:18:41.033072  487957 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1009 20:18:41.033092  487957 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1009 20:18:41.033176  487957 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-565110
	I1009 20:18:41.040795  487957 addons.go:238] Setting addon default-storageclass=true in "embed-certs-565110"
	I1009 20:18:41.042855  487957 host.go:66] Checking if "embed-certs-565110" exists ...
	I1009 20:18:41.043324  487957 cli_runner.go:164] Run: docker container inspect embed-certs-565110 --format={{.State.Status}}
	I1009 20:18:41.076337  487957 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33436 SSHKeyPath:/home/jenkins/minikube-integration/21683-294150/.minikube/machines/embed-certs-565110/id_rsa Username:docker}
	I1009 20:18:41.101840  487957 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1009 20:18:41.101860  487957 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1009 20:18:41.101921  487957 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-565110
	I1009 20:18:41.136315  487957 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33436 SSHKeyPath:/home/jenkins/minikube-integration/21683-294150/.minikube/machines/embed-certs-565110/id_rsa Username:docker}
	I1009 20:18:41.464439  487957 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1009 20:18:41.464668  487957 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1009 20:18:41.481534  487957 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1009 20:18:41.509167  487957 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1009 20:18:42.036935  487957 start.go:977] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS's ConfigMap
	I1009 20:18:42.039336  487957 node_ready.go:35] waiting up to 6m0s for node "embed-certs-565110" to be "Ready" ...
	I1009 20:18:42.357504  487957 out.go:179] * Enabled addons: default-storageclass, storage-provisioner
	I1009 20:18:43.670316  485563 pod_ready.go:94] pod "coredns-66bc5c9577-h7jz6" is "Ready"
	I1009 20:18:43.670341  485563 pod_ready.go:86] duration metric: took 33.012270711s for pod "coredns-66bc5c9577-h7jz6" in "kube-system" namespace to be "Ready" or be gone ...
	I1009 20:18:43.673038  485563 pod_ready.go:83] waiting for pod "etcd-no-preload-020313" in "kube-system" namespace to be "Ready" or be gone ...
	I1009 20:18:43.677784  485563 pod_ready.go:94] pod "etcd-no-preload-020313" is "Ready"
	I1009 20:18:43.677811  485563 pod_ready.go:86] duration metric: took 4.744738ms for pod "etcd-no-preload-020313" in "kube-system" namespace to be "Ready" or be gone ...
	I1009 20:18:43.680394  485563 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-020313" in "kube-system" namespace to be "Ready" or be gone ...
	I1009 20:18:43.685734  485563 pod_ready.go:94] pod "kube-apiserver-no-preload-020313" is "Ready"
	I1009 20:18:43.685765  485563 pod_ready.go:86] duration metric: took 5.342851ms for pod "kube-apiserver-no-preload-020313" in "kube-system" namespace to be "Ready" or be gone ...
	I1009 20:18:43.688292  485563 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-020313" in "kube-system" namespace to be "Ready" or be gone ...
	I1009 20:18:43.862366  485563 pod_ready.go:94] pod "kube-controller-manager-no-preload-020313" is "Ready"
	I1009 20:18:43.862391  485563 pod_ready.go:86] duration metric: took 174.071318ms for pod "kube-controller-manager-no-preload-020313" in "kube-system" namespace to be "Ready" or be gone ...
	I1009 20:18:44.062443  485563 pod_ready.go:83] waiting for pod "kube-proxy-cd5v6" in "kube-system" namespace to be "Ready" or be gone ...
	I1009 20:18:44.462266  485563 pod_ready.go:94] pod "kube-proxy-cd5v6" is "Ready"
	I1009 20:18:44.462301  485563 pod_ready.go:86] duration metric: took 399.823993ms for pod "kube-proxy-cd5v6" in "kube-system" namespace to be "Ready" or be gone ...
	I1009 20:18:44.666593  485563 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-020313" in "kube-system" namespace to be "Ready" or be gone ...
	I1009 20:18:45.066392  485563 pod_ready.go:94] pod "kube-scheduler-no-preload-020313" is "Ready"
	I1009 20:18:45.066425  485563 pod_ready.go:86] duration metric: took 399.801486ms for pod "kube-scheduler-no-preload-020313" in "kube-system" namespace to be "Ready" or be gone ...
	I1009 20:18:45.066441  485563 pod_ready.go:40] duration metric: took 34.412045642s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1009 20:18:45.206037  485563 start.go:628] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1009 20:18:45.207816  485563 out.go:179] * Done! kubectl is now configured to use "no-preload-020313" cluster and "default" namespace by default
	I1009 20:18:42.359111  487957 addons.go:514] duration metric: took 1.391539274s for enable addons: enabled=[default-storageclass storage-provisioner]
	I1009 20:18:42.541565  487957 kapi.go:214] "coredns" deployment in "kube-system" namespace and "embed-certs-565110" context rescaled to 1 replicas
	W1009 20:18:44.042503  487957 node_ready.go:57] node "embed-certs-565110" has "Ready":"False" status (will retry)
	W1009 20:18:46.542625  487957 node_ready.go:57] node "embed-certs-565110" has "Ready":"False" status (will retry)
	W1009 20:18:49.042996  487957 node_ready.go:57] node "embed-certs-565110" has "Ready":"False" status (will retry)
	W1009 20:18:51.542630  487957 node_ready.go:57] node "embed-certs-565110" has "Ready":"False" status (will retry)
	W1009 20:18:54.042948  487957 node_ready.go:57] node "embed-certs-565110" has "Ready":"False" status (will retry)
	W1009 20:18:56.542867  487957 node_ready.go:57] node "embed-certs-565110" has "Ready":"False" status (will retry)
	W1009 20:18:58.543093  487957 node_ready.go:57] node "embed-certs-565110" has "Ready":"False" status (will retry)
	W1009 20:19:00.543628  487957 node_ready.go:57] node "embed-certs-565110" has "Ready":"False" status (will retry)
	
	
	==> CRI-O <==
	Oct 09 20:18:39 no-preload-020313 crio[653]: time="2025-10-09T20:18:39.265965617Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=278dd3de-934b-4f0c-a59b-00eae0cb7467 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 20:18:39 no-preload-020313 crio[653]: time="2025-10-09T20:18:39.26916672Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 09 20:18:39 no-preload-020313 crio[653]: time="2025-10-09T20:18:39.273596533Z" level=info msg="Removed container 0afd5a0006249b39174c994b4216b2b676030afc2da532d4d2ccbb7240b6bcf7: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-dtq85/dashboard-metrics-scraper" id=83e2b269-5f19-48c3-ba7e-928832c2d801 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 09 20:18:39 no-preload-020313 crio[653]: time="2025-10-09T20:18:39.278774918Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 09 20:18:39 no-preload-020313 crio[653]: time="2025-10-09T20:18:39.278957017Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/9f395e64ed66d6e2d11c1915a5efb55d9226364d810c88098bde381ca110af5a/merged/etc/passwd: no such file or directory"
	Oct 09 20:18:39 no-preload-020313 crio[653]: time="2025-10-09T20:18:39.278977858Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/9f395e64ed66d6e2d11c1915a5efb55d9226364d810c88098bde381ca110af5a/merged/etc/group: no such file or directory"
	Oct 09 20:18:39 no-preload-020313 crio[653]: time="2025-10-09T20:18:39.279224696Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 09 20:18:39 no-preload-020313 crio[653]: time="2025-10-09T20:18:39.294515837Z" level=info msg="Created container 3d32dbce2cc613f987b4189a71d62e455a6390b309ef522069aa954c7269e07b: kube-system/storage-provisioner/storage-provisioner" id=278dd3de-934b-4f0c-a59b-00eae0cb7467 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 20:18:39 no-preload-020313 crio[653]: time="2025-10-09T20:18:39.295378642Z" level=info msg="Starting container: 3d32dbce2cc613f987b4189a71d62e455a6390b309ef522069aa954c7269e07b" id=424fb86c-9d47-41c3-89de-e156049716dd name=/runtime.v1.RuntimeService/StartContainer
	Oct 09 20:18:39 no-preload-020313 crio[653]: time="2025-10-09T20:18:39.297867161Z" level=info msg="Started container" PID=1626 containerID=3d32dbce2cc613f987b4189a71d62e455a6390b309ef522069aa954c7269e07b description=kube-system/storage-provisioner/storage-provisioner id=424fb86c-9d47-41c3-89de-e156049716dd name=/runtime.v1.RuntimeService/StartContainer sandboxID=5a73b1409a90f548928d13b2b2697b3cc601605b508a4af6d1ac3ad1055bea9c
	Oct 09 20:18:48 no-preload-020313 crio[653]: time="2025-10-09T20:18:48.917633485Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 09 20:18:48 no-preload-020313 crio[653]: time="2025-10-09T20:18:48.925090145Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 09 20:18:48 no-preload-020313 crio[653]: time="2025-10-09T20:18:48.925307264Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 09 20:18:48 no-preload-020313 crio[653]: time="2025-10-09T20:18:48.925390489Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 09 20:18:48 no-preload-020313 crio[653]: time="2025-10-09T20:18:48.928997037Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 09 20:18:48 no-preload-020313 crio[653]: time="2025-10-09T20:18:48.929198632Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 09 20:18:48 no-preload-020313 crio[653]: time="2025-10-09T20:18:48.929236114Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 09 20:18:48 no-preload-020313 crio[653]: time="2025-10-09T20:18:48.932465376Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 09 20:18:48 no-preload-020313 crio[653]: time="2025-10-09T20:18:48.932499806Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 09 20:18:48 no-preload-020313 crio[653]: time="2025-10-09T20:18:48.932530395Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 09 20:18:48 no-preload-020313 crio[653]: time="2025-10-09T20:18:48.935736789Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 09 20:18:48 no-preload-020313 crio[653]: time="2025-10-09T20:18:48.935772285Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 09 20:18:48 no-preload-020313 crio[653]: time="2025-10-09T20:18:48.93579695Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 09 20:18:48 no-preload-020313 crio[653]: time="2025-10-09T20:18:48.939332728Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 09 20:18:48 no-preload-020313 crio[653]: time="2025-10-09T20:18:48.93936989Z" level=info msg="Updated default CNI network name to kindnet"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED              STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	3d32dbce2cc61       66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51                                           23 seconds ago       Running             storage-provisioner         2                   5a73b1409a90f       storage-provisioner                          kube-system
	874aa1307bd23       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           23 seconds ago       Exited              dashboard-metrics-scraper   2                   6cf35c67c122a       dashboard-metrics-scraper-6ffb444bf9-dtq85   kubernetes-dashboard
	fe7a54433b350       docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf   34 seconds ago       Running             kubernetes-dashboard        0                   ac87e7f1e7b02       kubernetes-dashboard-855c9754f9-46jtk        kubernetes-dashboard
	d6b7ee85aeefa       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                           54 seconds ago       Running             coredns                     1                   6d07cf22449bc       coredns-66bc5c9577-h7jz6                     kube-system
	30db86e88976f       1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c                                           54 seconds ago       Running             busybox                     1                   b7748c0a4538a       busybox                                      default
	cfac8e5ac3da2       66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51                                           54 seconds ago       Exited              storage-provisioner         1                   5a73b1409a90f       storage-provisioner                          kube-system
	0442d50e4e396       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                           54 seconds ago       Running             kube-proxy                  1                   59c2a8e5c6b1d       kube-proxy-cd5v6                             kube-system
	042d3009a6505       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                           54 seconds ago       Running             kindnet-cni                 1                   d080dbf8da035       kindnet-47kwl                                kube-system
	22b87e577d7a8       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                           About a minute ago   Running             kube-controller-manager     1                   5be5222eb7463       kube-controller-manager-no-preload-020313    kube-system
	5abd9717aed8a       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                           About a minute ago   Running             etcd                        1                   561cfcce190ce       etcd-no-preload-020313                       kube-system
	bdcbfecca01ea       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                           About a minute ago   Running             kube-scheduler              1                   6edc536f3c664       kube-scheduler-no-preload-020313             kube-system
	d49e0cc690dca       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                           About a minute ago   Running             kube-apiserver              1                   9cdbf2be3627c       kube-apiserver-no-preload-020313             kube-system
	
	
	==> coredns [d6b7ee85aeefababe2c083f6e0a8cd0dc31cd7c5844cb95bf3b217fc2272910f] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:46041 - 48469 "HINFO IN 630548794168358172.7858796566592122350. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.016536469s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	
	
	==> describe nodes <==
	Name:               no-preload-020313
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=no-preload-020313
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=67931d2cc0a2e29153be17ee4f2d502b8a45c9cb
	                    minikube.k8s.io/name=no-preload-020313
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_09T20_17_03_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 09 Oct 2025 20:16:59 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-020313
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 09 Oct 2025 20:18:48 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 09 Oct 2025 20:18:48 +0000   Thu, 09 Oct 2025 20:16:51 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 09 Oct 2025 20:18:48 +0000   Thu, 09 Oct 2025 20:16:51 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 09 Oct 2025 20:18:48 +0000   Thu, 09 Oct 2025 20:16:51 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 09 Oct 2025 20:18:48 +0000   Thu, 09 Oct 2025 20:17:23 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    no-preload-020313
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022292Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022292Ki
	  pods:               110
	System Info:
	  Machine ID:                 4a34688d9b034718ad38693aacdec85a
	  System UUID:                a3d84e5d-68ba-4d89-bdca-3ce490a9cb49
	  Boot ID:                    7eb9c7d9-8be8-415a-b196-052b632584a4
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         97s
	  kube-system                 coredns-66bc5c9577-h7jz6                      100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     115s
	  kube-system                 etcd-no-preload-020313                        100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         2m1s
	  kube-system                 kindnet-47kwl                                 100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      115s
	  kube-system                 kube-apiserver-no-preload-020313              250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m1s
	  kube-system                 kube-controller-manager-no-preload-020313     200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m1s
	  kube-system                 kube-proxy-cd5v6                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         115s
	  kube-system                 kube-scheduler-no-preload-020313              100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m1s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         114s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-dtq85    0 (0%)        0 (0%)      0 (0%)           0 (0%)         51s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-46jtk         0 (0%)        0 (0%)      0 (0%)           0 (0%)         51s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 113s                   kube-proxy       
	  Normal   Starting                 52s                    kube-proxy       
	  Normal   Starting                 2m14s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 2m14s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  2m14s (x8 over 2m14s)  kubelet          Node no-preload-020313 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m14s (x8 over 2m14s)  kubelet          Node no-preload-020313 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m14s (x8 over 2m14s)  kubelet          Node no-preload-020313 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    2m1s                   kubelet          Node no-preload-020313 status is now: NodeHasNoDiskPressure
	  Warning  CgroupV1                 2m1s                   kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  2m1s                   kubelet          Node no-preload-020313 status is now: NodeHasSufficientMemory
	  Normal   NodeHasSufficientPID     2m1s                   kubelet          Node no-preload-020313 status is now: NodeHasSufficientPID
	  Normal   Starting                 2m1s                   kubelet          Starting kubelet.
	  Normal   RegisteredNode           116s                   node-controller  Node no-preload-020313 event: Registered Node no-preload-020313 in Controller
	  Normal   NodeReady                100s                   kubelet          Node no-preload-020313 status is now: NodeReady
	  Normal   Starting                 64s                    kubelet          Starting kubelet.
	  Warning  CgroupV1                 64s                    kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  63s (x8 over 64s)      kubelet          Node no-preload-020313 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    63s (x8 over 64s)      kubelet          Node no-preload-020313 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     63s (x8 over 64s)      kubelet          Node no-preload-020313 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           52s                    node-controller  Node no-preload-020313 event: Registered Node no-preload-020313 in Controller
	
	
	==> dmesg <==
	[Oct 9 19:47] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:48] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:49] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:50] overlayfs: idmapped layers are currently not supported
	[ +27.967875] overlayfs: idmapped layers are currently not supported
	[  +2.167003] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:51] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:52] overlayfs: idmapped layers are currently not supported
	[ +41.056229] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:53] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:54] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:55] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:57] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:59] overlayfs: idmapped layers are currently not supported
	[ +30.257956] overlayfs: idmapped layers are currently not supported
	[Oct 9 20:02] overlayfs: idmapped layers are currently not supported
	[Oct 9 20:04] overlayfs: idmapped layers are currently not supported
	[Oct 9 20:06] overlayfs: idmapped layers are currently not supported
	[Oct 9 20:12] overlayfs: idmapped layers are currently not supported
	[Oct 9 20:14] overlayfs: idmapped layers are currently not supported
	[Oct 9 20:15] overlayfs: idmapped layers are currently not supported
	[Oct 9 20:16] overlayfs: idmapped layers are currently not supported
	[ +23.810739] overlayfs: idmapped layers are currently not supported
	[Oct 9 20:18] overlayfs: idmapped layers are currently not supported
	[ +26.082927] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [5abd9717aed8a5baaa24ce4dbac3f6a6652f3d3b84cb43dc09007beee7a84423] <==
	{"level":"warn","ts":"2025-10-09T20:18:04.895268Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51970","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T20:18:04.933903Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51976","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T20:18:05.011900Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52004","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T20:18:05.047277Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52012","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T20:18:05.072152Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52024","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T20:18:05.133766Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52042","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T20:18:05.180070Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52060","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T20:18:05.215590Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52072","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T20:18:05.255470Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52096","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T20:18:05.314593Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52116","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T20:18:05.365227Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52134","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T20:18:05.400526Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52142","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T20:18:05.497021Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52170","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T20:18:05.625171Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52182","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T20:18:05.708324Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52194","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T20:18:05.775451Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52200","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T20:18:05.841992Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52226","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T20:18:05.887395Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52244","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T20:18:05.953958Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52264","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T20:18:05.992186Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52282","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T20:18:06.060651Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52308","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T20:18:06.108439Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52322","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T20:18:06.130302Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52342","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T20:18:06.140455Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52358","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T20:18:06.267014Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52378","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 20:19:03 up  3:01,  0 user,  load average: 4.30, 2.55, 1.92
	Linux no-preload-020313 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [042d3009a6505b38db3a5645a55f6992d1b6ef9254086f64eef6f0621cff64c8] <==
	I1009 20:18:08.557660       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1009 20:18:08.557875       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1009 20:18:08.558028       1 main.go:148] setting mtu 1500 for CNI 
	I1009 20:18:08.558042       1 main.go:178] kindnetd IP family: "ipv4"
	I1009 20:18:08.558057       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-09T20:18:08Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1009 20:18:08.916104       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1009 20:18:08.920271       1 controller.go:381] "Waiting for informer caches to sync"
	I1009 20:18:08.920300       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1009 20:18:08.920868       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1009 20:18:38.930697       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1009 20:18:38.930697       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1009 20:18:38.930795       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1009 20:18:38.930871       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	I1009 20:18:40.620514       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1009 20:18:40.620546       1 metrics.go:72] Registering metrics
	I1009 20:18:40.620631       1 controller.go:711] "Syncing nftables rules"
	I1009 20:18:48.916600       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1009 20:18:48.916736       1 main.go:301] handling current node
	I1009 20:18:58.924621       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1009 20:18:58.924661       1 main.go:301] handling current node
	
	
	==> kube-apiserver [d49e0cc690dcab668ca06327548e322b4d012301c7ad96444959726efbca4e09] <==
	I1009 20:18:07.524509       1 policy_source.go:240] refreshing policies
	I1009 20:18:07.546369       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1009 20:18:07.561189       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1009 20:18:07.565049       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1009 20:18:07.573358       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1009 20:18:07.573414       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1009 20:18:07.601252       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1009 20:18:07.601332       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1009 20:18:07.607109       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1009 20:18:07.607316       1 aggregator.go:171] initial CRD sync complete...
	I1009 20:18:07.607328       1 autoregister_controller.go:144] Starting autoregister controller
	I1009 20:18:07.607335       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1009 20:18:07.607340       1 cache.go:39] Caches are synced for autoregister controller
	I1009 20:18:07.613048       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1009 20:18:07.879882       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1009 20:18:08.245542       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1009 20:18:08.908702       1 controller.go:667] quota admission added evaluator for: namespaces
	I1009 20:18:09.432194       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1009 20:18:09.554545       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1009 20:18:09.631975       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1009 20:18:09.922350       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.109.69.145"}
	I1009 20:18:09.982279       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.100.89.231"}
	I1009 20:18:12.183157       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1009 20:18:12.285150       1 controller.go:667] quota admission added evaluator for: endpoints
	I1009 20:18:12.404932       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [22b87e577d7a8f108e7d77d095e44d5b3392e21fb7da8260fe838b3e930b2229] <==
	I1009 20:18:11.801844       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1009 20:18:11.805150       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1009 20:18:11.809027       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1009 20:18:11.809053       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1009 20:18:11.809062       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1009 20:18:11.814149       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1009 20:18:11.815405       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1009 20:18:11.818678       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1009 20:18:11.820961       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1009 20:18:11.823951       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1009 20:18:11.824120       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1009 20:18:11.827370       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1009 20:18:11.827492       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1009 20:18:11.831200       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1009 20:18:11.834679       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1009 20:18:11.835936       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1009 20:18:11.839129       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1009 20:18:11.839247       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1009 20:18:11.839367       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="no-preload-020313"
	I1009 20:18:11.839427       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1009 20:18:11.840645       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1009 20:18:11.846071       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1009 20:18:11.851607       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1009 20:18:12.431989       1 endpointslice_controller.go:344] "Error syncing endpoint slices for service, retrying" logger="endpointslice-controller" key="kubernetes-dashboard/dashboard-metrics-scraper" err="EndpointSlice informer cache is out of date"
	I1009 20:18:12.433733       1 endpointslice_controller.go:344] "Error syncing endpoint slices for service, retrying" logger="endpointslice-controller" key="kubernetes-dashboard/kubernetes-dashboard" err="EndpointSlice informer cache is out of date"
	
	
	==> kube-proxy [0442d50e4e3961eb21b5a12dda29ff9aea11f015d76a75f5fc6d85fbecaab975] <==
	I1009 20:18:09.833233       1 server_linux.go:53] "Using iptables proxy"
	I1009 20:18:10.209573       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1009 20:18:10.310689       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1009 20:18:10.310720       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1009 20:18:10.310787       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1009 20:18:10.348382       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1009 20:18:10.348712       1 server_linux.go:132] "Using iptables Proxier"
	I1009 20:18:10.353726       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1009 20:18:10.354143       1 server.go:527] "Version info" version="v1.34.1"
	I1009 20:18:10.354337       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1009 20:18:10.355998       1 config.go:200] "Starting service config controller"
	I1009 20:18:10.356063       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1009 20:18:10.356116       1 config.go:106] "Starting endpoint slice config controller"
	I1009 20:18:10.356145       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1009 20:18:10.356181       1 config.go:403] "Starting serviceCIDR config controller"
	I1009 20:18:10.356207       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1009 20:18:10.356948       1 config.go:309] "Starting node config controller"
	I1009 20:18:10.357011       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1009 20:18:10.357041       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1009 20:18:10.457544       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1009 20:18:10.459152       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1009 20:18:10.459192       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [bdcbfecca01ea6e3e0ee392800df2ec67f04ed687955da27cce3925008d3bc5a] <==
	I1009 20:18:05.045408       1 serving.go:386] Generated self-signed cert in-memory
	I1009 20:18:07.898243       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1009 20:18:07.902923       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1009 20:18:07.979508       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1009 20:18:07.979793       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1009 20:18:07.979861       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1009 20:18:07.979910       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1009 20:18:07.987621       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1009 20:18:07.987722       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1009 20:18:07.987771       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1009 20:18:07.990173       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1009 20:18:08.080523       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1009 20:18:08.090088       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1009 20:18:08.090416       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	
	
	==> kubelet <==
	Oct 09 20:18:13 no-preload-020313 kubelet[774]: E1009 20:18:13.550055     774 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/ffc02df7-a011-4dff-a92d-b4705e05953c-kube-api-access-p2jkz podName:ffc02df7-a011-4dff-a92d-b4705e05953c nodeName:}" failed. No retries permitted until 2025-10-09 20:18:14.050028025 +0000 UTC m=+14.504640970 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-p2jkz" (UniqueName: "kubernetes.io/projected/ffc02df7-a011-4dff-a92d-b4705e05953c-kube-api-access-p2jkz") pod "kubernetes-dashboard-855c9754f9-46jtk" (UID: "ffc02df7-a011-4dff-a92d-b4705e05953c") : failed to sync configmap cache: timed out waiting for the condition
	Oct 09 20:18:13 no-preload-020313 kubelet[774]: E1009 20:18:13.554514     774 projected.go:291] Couldn't get configMap kubernetes-dashboard/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition
	Oct 09 20:18:13 no-preload-020313 kubelet[774]: E1009 20:18:13.554566     774 projected.go:196] Error preparing data for projected volume kube-api-access-4tdzg for pod kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-dtq85: failed to sync configmap cache: timed out waiting for the condition
	Oct 09 20:18:13 no-preload-020313 kubelet[774]: E1009 20:18:13.554641     774 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/31bd6e7a-7a5f-4056-accd-4f55dfce30df-kube-api-access-4tdzg podName:31bd6e7a-7a5f-4056-accd-4f55dfce30df nodeName:}" failed. No retries permitted until 2025-10-09 20:18:14.054620614 +0000 UTC m=+14.509233559 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-4tdzg" (UniqueName: "kubernetes.io/projected/31bd6e7a-7a5f-4056-accd-4f55dfce30df-kube-api-access-4tdzg") pod "dashboard-metrics-scraper-6ffb444bf9-dtq85" (UID: "31bd6e7a-7a5f-4056-accd-4f55dfce30df") : failed to sync configmap cache: timed out waiting for the condition
	Oct 09 20:18:14 no-preload-020313 kubelet[774]: W1009 20:18:14.255398     774 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/5f4dc51ee851ef6c368b3e8adfe4e5921c2b1bdc3199a9c54c6ccf58afab3861/crio-6cf35c67c122aa3e69dfd6153523853775a7a48888fd8de0fdf846d0caf2bbe6 WatchSource:0}: Error finding container 6cf35c67c122aa3e69dfd6153523853775a7a48888fd8de0fdf846d0caf2bbe6: Status 404 returned error can't find the container with id 6cf35c67c122aa3e69dfd6153523853775a7a48888fd8de0fdf846d0caf2bbe6
	Oct 09 20:18:14 no-preload-020313 kubelet[774]: W1009 20:18:14.271735     774 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/5f4dc51ee851ef6c368b3e8adfe4e5921c2b1bdc3199a9c54c6ccf58afab3861/crio-ac87e7f1e7b026c6a28c236052d46cc31752aa4610e062e9cf2c31fb01133e3d WatchSource:0}: Error finding container ac87e7f1e7b026c6a28c236052d46cc31752aa4610e062e9cf2c31fb01133e3d: Status 404 returned error can't find the container with id ac87e7f1e7b026c6a28c236052d46cc31752aa4610e062e9cf2c31fb01133e3d
	Oct 09 20:18:21 no-preload-020313 kubelet[774]: I1009 20:18:21.190435     774 scope.go:117] "RemoveContainer" containerID="7b824854b72b754ea7bde958fa635d4205a77467a0847c96276698eeb17623b4"
	Oct 09 20:18:22 no-preload-020313 kubelet[774]: I1009 20:18:22.199442     774 scope.go:117] "RemoveContainer" containerID="7b824854b72b754ea7bde958fa635d4205a77467a0847c96276698eeb17623b4"
	Oct 09 20:18:22 no-preload-020313 kubelet[774]: I1009 20:18:22.200047     774 scope.go:117] "RemoveContainer" containerID="0afd5a0006249b39174c994b4216b2b676030afc2da532d4d2ccbb7240b6bcf7"
	Oct 09 20:18:22 no-preload-020313 kubelet[774]: E1009 20:18:22.204830     774 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-dtq85_kubernetes-dashboard(31bd6e7a-7a5f-4056-accd-4f55dfce30df)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-dtq85" podUID="31bd6e7a-7a5f-4056-accd-4f55dfce30df"
	Oct 09 20:18:23 no-preload-020313 kubelet[774]: I1009 20:18:23.203819     774 scope.go:117] "RemoveContainer" containerID="0afd5a0006249b39174c994b4216b2b676030afc2da532d4d2ccbb7240b6bcf7"
	Oct 09 20:18:23 no-preload-020313 kubelet[774]: E1009 20:18:23.204130     774 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-dtq85_kubernetes-dashboard(31bd6e7a-7a5f-4056-accd-4f55dfce30df)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-dtq85" podUID="31bd6e7a-7a5f-4056-accd-4f55dfce30df"
	Oct 09 20:18:24 no-preload-020313 kubelet[774]: I1009 20:18:24.206297     774 scope.go:117] "RemoveContainer" containerID="0afd5a0006249b39174c994b4216b2b676030afc2da532d4d2ccbb7240b6bcf7"
	Oct 09 20:18:24 no-preload-020313 kubelet[774]: E1009 20:18:24.206465     774 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-dtq85_kubernetes-dashboard(31bd6e7a-7a5f-4056-accd-4f55dfce30df)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-dtq85" podUID="31bd6e7a-7a5f-4056-accd-4f55dfce30df"
	Oct 09 20:18:38 no-preload-020313 kubelet[774]: I1009 20:18:38.955605     774 scope.go:117] "RemoveContainer" containerID="0afd5a0006249b39174c994b4216b2b676030afc2da532d4d2ccbb7240b6bcf7"
	Oct 09 20:18:39 no-preload-020313 kubelet[774]: I1009 20:18:39.247875     774 scope.go:117] "RemoveContainer" containerID="0afd5a0006249b39174c994b4216b2b676030afc2da532d4d2ccbb7240b6bcf7"
	Oct 09 20:18:39 no-preload-020313 kubelet[774]: I1009 20:18:39.248183     774 scope.go:117] "RemoveContainer" containerID="874aa1307bd23de55196930ba25ea04fa85d47795dbd099fb33715b82b0ca793"
	Oct 09 20:18:39 no-preload-020313 kubelet[774]: E1009 20:18:39.248345     774 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-dtq85_kubernetes-dashboard(31bd6e7a-7a5f-4056-accd-4f55dfce30df)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-dtq85" podUID="31bd6e7a-7a5f-4056-accd-4f55dfce30df"
	Oct 09 20:18:39 no-preload-020313 kubelet[774]: I1009 20:18:39.258947     774 scope.go:117] "RemoveContainer" containerID="cfac8e5ac3da24e22eb9c6cef2647c4b3078ab69fc092c7b1a73d4bc627d2f52"
	Oct 09 20:18:39 no-preload-020313 kubelet[774]: I1009 20:18:39.277624     774 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-46jtk" podStartSLOduration=13.167354224 podStartE2EDuration="27.277605138s" podCreationTimestamp="2025-10-09 20:18:12 +0000 UTC" firstStartedPulling="2025-10-09 20:18:14.277329532 +0000 UTC m=+14.731942469" lastFinishedPulling="2025-10-09 20:18:28.387580446 +0000 UTC m=+28.842193383" observedRunningTime="2025-10-09 20:18:29.246684578 +0000 UTC m=+29.701297514" watchObservedRunningTime="2025-10-09 20:18:39.277605138 +0000 UTC m=+39.732218083"
	Oct 09 20:18:44 no-preload-020313 kubelet[774]: I1009 20:18:44.193190     774 scope.go:117] "RemoveContainer" containerID="874aa1307bd23de55196930ba25ea04fa85d47795dbd099fb33715b82b0ca793"
	Oct 09 20:18:44 no-preload-020313 kubelet[774]: E1009 20:18:44.193889     774 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-dtq85_kubernetes-dashboard(31bd6e7a-7a5f-4056-accd-4f55dfce30df)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-dtq85" podUID="31bd6e7a-7a5f-4056-accd-4f55dfce30df"
	Oct 09 20:18:57 no-preload-020313 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 09 20:18:57 no-preload-020313 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 09 20:18:57 no-preload-020313 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	
	
	==> kubernetes-dashboard [fe7a54433b35090ac26deba9c9b4e3b51e7532d0b41a463ca4aa4968c8781c7f] <==
	2025/10/09 20:18:28 Starting overwatch
	2025/10/09 20:18:28 Using namespace: kubernetes-dashboard
	2025/10/09 20:18:28 Using in-cluster config to connect to apiserver
	2025/10/09 20:18:28 Using secret token for csrf signing
	2025/10/09 20:18:28 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/10/09 20:18:28 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/10/09 20:18:28 Successful initial request to the apiserver, version: v1.34.1
	2025/10/09 20:18:28 Generating JWE encryption key
	2025/10/09 20:18:28 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/10/09 20:18:28 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/10/09 20:18:29 Initializing JWE encryption key from synchronized object
	2025/10/09 20:18:29 Creating in-cluster Sidecar client
	2025/10/09 20:18:29 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/09 20:18:29 Serving insecurely on HTTP port: 9090
	2025/10/09 20:18:59 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	
	
	==> storage-provisioner [3d32dbce2cc613f987b4189a71d62e455a6390b309ef522069aa954c7269e07b] <==
	I1009 20:18:39.322187       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1009 20:18:39.335878       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1009 20:18:39.335938       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1009 20:18:39.341047       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1009 20:18:42.796537       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1009 20:18:47.057075       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1009 20:18:50.656151       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1009 20:18:53.710703       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1009 20:18:56.732722       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1009 20:18:56.738314       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1009 20:18:56.738504       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1009 20:18:56.738904       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"2df7659d-a29d-4122-8f28-18add9557e18", APIVersion:"v1", ResourceVersion:"686", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-020313_e5f27ae7-2a2d-4dc1-a83b-d251f668aa62 became leader
	I1009 20:18:56.738989       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-020313_e5f27ae7-2a2d-4dc1-a83b-d251f668aa62!
	W1009 20:18:56.743729       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1009 20:18:56.757602       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1009 20:18:56.839203       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-020313_e5f27ae7-2a2d-4dc1-a83b-d251f668aa62!
	W1009 20:18:58.762084       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1009 20:18:58.767373       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1009 20:19:00.774917       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1009 20:19:00.781013       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1009 20:19:02.784942       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1009 20:19:02.790617       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [cfac8e5ac3da24e22eb9c6cef2647c4b3078ab69fc092c7b1a73d4bc627d2f52] <==
	I1009 20:18:08.998705       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1009 20:18:39.179933       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-020313 -n no-preload-020313
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-020313 -n no-preload-020313: exit status 2 (379.171392ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context no-preload-020313 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/no-preload/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/no-preload/serial/Pause (6.99s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (3.27s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-565110 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-565110 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (312.365246ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-09T20:19:35Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-565110 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context embed-certs-565110 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context embed-certs-565110 describe deploy/metrics-server -n kube-system: exit status 1 (106.186605ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): deployments.apps "metrics-server" not found

                                                
                                                
** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context embed-certs-565110 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/embed-certs/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/embed-certs/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect embed-certs-565110
helpers_test.go:243: (dbg) docker inspect embed-certs-565110:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "5db0c011c6081f65675c1c7e0e0cead1ee603fc85ef523d794ffef197f368e85",
	        "Created": "2025-10-09T20:18:08.202138688Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 488655,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-09T20:18:08.283556757Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:0e619a02aa1f399c748a05785ca381533d7a01df724b62f3a9ddd9e288db8f6f",
	        "ResolvConfPath": "/var/lib/docker/containers/5db0c011c6081f65675c1c7e0e0cead1ee603fc85ef523d794ffef197f368e85/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/5db0c011c6081f65675c1c7e0e0cead1ee603fc85ef523d794ffef197f368e85/hostname",
	        "HostsPath": "/var/lib/docker/containers/5db0c011c6081f65675c1c7e0e0cead1ee603fc85ef523d794ffef197f368e85/hosts",
	        "LogPath": "/var/lib/docker/containers/5db0c011c6081f65675c1c7e0e0cead1ee603fc85ef523d794ffef197f368e85/5db0c011c6081f65675c1c7e0e0cead1ee603fc85ef523d794ffef197f368e85-json.log",
	        "Name": "/embed-certs-565110",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "embed-certs-565110:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "embed-certs-565110",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "5db0c011c6081f65675c1c7e0e0cead1ee603fc85ef523d794ffef197f368e85",
	                "LowerDir": "/var/lib/docker/overlay2/1f20732bafca7c4ec6bbe75518ab73ef01fcee46e54c892cfb75e2f68114dce6-init/diff:/var/lib/docker/overlay2/810a91395ed9b7ed2c0bbbdee8600efcf64f88722cbabc47d471235a9f901ed9/diff",
	                "MergedDir": "/var/lib/docker/overlay2/1f20732bafca7c4ec6bbe75518ab73ef01fcee46e54c892cfb75e2f68114dce6/merged",
	                "UpperDir": "/var/lib/docker/overlay2/1f20732bafca7c4ec6bbe75518ab73ef01fcee46e54c892cfb75e2f68114dce6/diff",
	                "WorkDir": "/var/lib/docker/overlay2/1f20732bafca7c4ec6bbe75518ab73ef01fcee46e54c892cfb75e2f68114dce6/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "embed-certs-565110",
	                "Source": "/var/lib/docker/volumes/embed-certs-565110/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-565110",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-565110",
	                "name.minikube.sigs.k8s.io": "embed-certs-565110",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "4d193ba0f2533d533c52f50d3ac6c88656b2d29637e6bcaba82d0eb57a2f5242",
	            "SandboxKey": "/var/run/docker/netns/4d193ba0f253",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33436"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33437"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33440"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33438"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33439"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "embed-certs-565110": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "96:8e:8b:53:1f:26",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "c39245925c93cf03ed8abe3702c98fe11aa5fe2a748150abd863ee2a4578bafb",
	                    "EndpointID": "0313407dd3d94a31207914983164aef19a7585bed91be1509d9b1f373b44b5d9",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "embed-certs-565110",
	                        "5db0c011c608"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-565110 -n embed-certs-565110
helpers_test.go:252: <<< TestStartStop/group/embed-certs/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/embed-certs/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p embed-certs-565110 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p embed-certs-565110 logs -n 25: (1.580486979s)
helpers_test.go:260: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ delete  │ -p cert-options-038875                                                                                                                                                                                                                        │ cert-options-038875          │ jenkins │ v1.37.0 │ 09 Oct 25 20:15 UTC │ 09 Oct 25 20:15 UTC │
	│ start   │ -p old-k8s-version-670649 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-670649       │ jenkins │ v1.37.0 │ 09 Oct 25 20:15 UTC │ 09 Oct 25 20:16 UTC │
	│ start   │ -p cert-expiration-282540 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-282540       │ jenkins │ v1.37.0 │ 09 Oct 25 20:15 UTC │ 09 Oct 25 20:16 UTC │
	│ delete  │ -p cert-expiration-282540                                                                                                                                                                                                                     │ cert-expiration-282540       │ jenkins │ v1.37.0 │ 09 Oct 25 20:16 UTC │ 09 Oct 25 20:16 UTC │
	│ start   │ -p no-preload-020313 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-020313            │ jenkins │ v1.37.0 │ 09 Oct 25 20:16 UTC │ 09 Oct 25 20:17 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-670649 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-670649       │ jenkins │ v1.37.0 │ 09 Oct 25 20:16 UTC │                     │
	│ stop    │ -p old-k8s-version-670649 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-670649       │ jenkins │ v1.37.0 │ 09 Oct 25 20:16 UTC │ 09 Oct 25 20:16 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-670649 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-670649       │ jenkins │ v1.37.0 │ 09 Oct 25 20:16 UTC │ 09 Oct 25 20:16 UTC │
	│ start   │ -p old-k8s-version-670649 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-670649       │ jenkins │ v1.37.0 │ 09 Oct 25 20:16 UTC │ 09 Oct 25 20:17 UTC │
	│ addons  │ enable metrics-server -p no-preload-020313 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-020313            │ jenkins │ v1.37.0 │ 09 Oct 25 20:17 UTC │                     │
	│ stop    │ -p no-preload-020313 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-020313            │ jenkins │ v1.37.0 │ 09 Oct 25 20:17 UTC │ 09 Oct 25 20:17 UTC │
	│ image   │ old-k8s-version-670649 image list --format=json                                                                                                                                                                                               │ old-k8s-version-670649       │ jenkins │ v1.37.0 │ 09 Oct 25 20:17 UTC │ 09 Oct 25 20:17 UTC │
	│ pause   │ -p old-k8s-version-670649 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-670649       │ jenkins │ v1.37.0 │ 09 Oct 25 20:17 UTC │                     │
	│ addons  │ enable dashboard -p no-preload-020313 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-020313            │ jenkins │ v1.37.0 │ 09 Oct 25 20:17 UTC │ 09 Oct 25 20:17 UTC │
	│ start   │ -p no-preload-020313 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-020313            │ jenkins │ v1.37.0 │ 09 Oct 25 20:17 UTC │ 09 Oct 25 20:18 UTC │
	│ delete  │ -p old-k8s-version-670649                                                                                                                                                                                                                     │ old-k8s-version-670649       │ jenkins │ v1.37.0 │ 09 Oct 25 20:17 UTC │ 09 Oct 25 20:17 UTC │
	│ delete  │ -p old-k8s-version-670649                                                                                                                                                                                                                     │ old-k8s-version-670649       │ jenkins │ v1.37.0 │ 09 Oct 25 20:18 UTC │ 09 Oct 25 20:18 UTC │
	│ start   │ -p embed-certs-565110 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-565110           │ jenkins │ v1.37.0 │ 09 Oct 25 20:18 UTC │ 09 Oct 25 20:19 UTC │
	│ image   │ no-preload-020313 image list --format=json                                                                                                                                                                                                    │ no-preload-020313            │ jenkins │ v1.37.0 │ 09 Oct 25 20:18 UTC │ 09 Oct 25 20:18 UTC │
	│ pause   │ -p no-preload-020313 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-020313            │ jenkins │ v1.37.0 │ 09 Oct 25 20:18 UTC │                     │
	│ delete  │ -p no-preload-020313                                                                                                                                                                                                                          │ no-preload-020313            │ jenkins │ v1.37.0 │ 09 Oct 25 20:19 UTC │ 09 Oct 25 20:19 UTC │
	│ delete  │ -p no-preload-020313                                                                                                                                                                                                                          │ no-preload-020313            │ jenkins │ v1.37.0 │ 09 Oct 25 20:19 UTC │ 09 Oct 25 20:19 UTC │
	│ delete  │ -p disable-driver-mounts-613966                                                                                                                                                                                                               │ disable-driver-mounts-613966 │ jenkins │ v1.37.0 │ 09 Oct 25 20:19 UTC │ 09 Oct 25 20:19 UTC │
	│ start   │ -p default-k8s-diff-port-417984 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-417984 │ jenkins │ v1.37.0 │ 09 Oct 25 20:19 UTC │                     │
	│ addons  │ enable metrics-server -p embed-certs-565110 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-565110           │ jenkins │ v1.37.0 │ 09 Oct 25 20:19 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/09 20:19:07
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1009 20:19:07.162562  492745 out.go:360] Setting OutFile to fd 1 ...
	I1009 20:19:07.162847  492745 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1009 20:19:07.162861  492745 out.go:374] Setting ErrFile to fd 2...
	I1009 20:19:07.162868  492745 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1009 20:19:07.163147  492745 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21683-294150/.minikube/bin
	I1009 20:19:07.163645  492745 out.go:368] Setting JSON to false
	I1009 20:19:07.165576  492745 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":10887,"bootTime":1760030261,"procs":192,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1009 20:19:07.165695  492745 start.go:143] virtualization:  
	I1009 20:19:07.169880  492745 out.go:179] * [default-k8s-diff-port-417984] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1009 20:19:07.173103  492745 out.go:179]   - MINIKUBE_LOCATION=21683
	I1009 20:19:07.173294  492745 notify.go:221] Checking for updates...
	I1009 20:19:07.177303  492745 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1009 20:19:07.180707  492745 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21683-294150/kubeconfig
	I1009 20:19:07.183783  492745 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21683-294150/.minikube
	I1009 20:19:07.186813  492745 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1009 20:19:07.189781  492745 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1009 20:19:07.193297  492745 config.go:182] Loaded profile config "embed-certs-565110": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1009 20:19:07.193457  492745 driver.go:422] Setting default libvirt URI to qemu:///system
	I1009 20:19:07.226190  492745 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1009 20:19:07.226331  492745 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1009 20:19:07.282457  492745 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-09 20:19:07.273215146 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214827008 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1009 20:19:07.282569  492745 docker.go:319] overlay module found
	I1009 20:19:07.285759  492745 out.go:179] * Using the docker driver based on user configuration
	I1009 20:19:07.288553  492745 start.go:309] selected driver: docker
	I1009 20:19:07.288568  492745 start.go:930] validating driver "docker" against <nil>
	I1009 20:19:07.288582  492745 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1009 20:19:07.289437  492745 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1009 20:19:07.342487  492745 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-09 20:19:07.333693547 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214827008 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1009 20:19:07.342649  492745 start_flags.go:328] no existing cluster config was found, will generate one from the flags 
	I1009 20:19:07.342879  492745 start_flags.go:993] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1009 20:19:07.345730  492745 out.go:179] * Using Docker driver with root privileges
	I1009 20:19:07.348521  492745 cni.go:84] Creating CNI manager for ""
	I1009 20:19:07.348592  492745 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1009 20:19:07.348606  492745 start_flags.go:337] Found "CNI" CNI - setting NetworkPlugin=cni
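The line above records the CNI decision: because the docker driver is paired with the crio runtime, minikube recommends kindnet and sets NetworkPlugin=cni. A minimal Go sketch of that kind of lookup follows; the function and its rules are illustrative only, not minikube's actual cni package.

package main

import "fmt"

// chooseCNI is a hypothetical stand-in for the decision logged above:
// with the docker driver and a non-docker runtime (crio/containerd),
// a CNI plugin is required and kindnet is the recommended default.
func chooseCNI(driver, runtime, requested string) string {
	if requested != "" {
		return requested // an explicit --cni flag would win
	}
	if driver == "docker" && runtime != "docker" {
		return "kindnet"
	}
	return "auto"
}

func main() {
	fmt.Println(chooseCNI("docker", "crio", "")) // kindnet
}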
	I1009 20:19:07.348689  492745 start.go:353] cluster config:
	{Name:default-k8s-diff-port-417984 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-417984 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:
cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SS
HAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1009 20:19:07.351845  492745 out.go:179] * Starting "default-k8s-diff-port-417984" primary control-plane node in "default-k8s-diff-port-417984" cluster
	I1009 20:19:07.354667  492745 cache.go:123] Beginning downloading kic base image for docker with crio
	I1009 20:19:07.357685  492745 out.go:179] * Pulling base image v0.0.48-1759745255-21703 ...
	I1009 20:19:07.360468  492745 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1009 20:19:07.360524  492745 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21683-294150/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1009 20:19:07.360536  492745 cache.go:58] Caching tarball of preloaded images
	I1009 20:19:07.360583  492745 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon
	I1009 20:19:07.360623  492745 preload.go:233] Found /home/jenkins/minikube-integration/21683-294150/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1009 20:19:07.360633  492745 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1009 20:19:07.360744  492745 profile.go:143] Saving config to /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/default-k8s-diff-port-417984/config.json ...
	I1009 20:19:07.360764  492745 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/default-k8s-diff-port-417984/config.json: {Name:mk04373432094f298f763ee898d5bd7b27092d4c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 20:19:07.382046  492745 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon, skipping pull
	I1009 20:19:07.382069  492745 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 exists in daemon, skipping load
	I1009 20:19:07.382087  492745 cache.go:232] Successfully downloaded all kic artifacts
	I1009 20:19:07.382111  492745 start.go:361] acquireMachinesLock for default-k8s-diff-port-417984: {Name:mkbd5a4da97eed81f337e01b5ed29c5c6848874d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1009 20:19:07.382213  492745 start.go:365] duration metric: took 82.61µs to acquireMachinesLock for "default-k8s-diff-port-417984"
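The acquireMachinesLock entry above shows the poll-with-timeout pattern (Delay:500ms Timeout:10m0s) used before provisioning a machine. Below is a hedged sketch of that pattern using a plain non-blocking file lock; it is an assumption about the shape of the mechanism, not minikube's actual lock implementation.

package main

import (
	"errors"
	"fmt"
	"os"
	"syscall"
	"time"
)

// acquireWithTimeout polls a non-blocking flock every `delay` until the lock
// is obtained or `timeout` expires, mirroring the Delay/Timeout fields logged.
func acquireWithTimeout(path string, delay, timeout time.Duration) (*os.File, error) {
	f, err := os.OpenFile(path, os.O_CREATE|os.O_RDWR, 0o600)
	if err != nil {
		return nil, err
	}
	deadline := time.Now().Add(timeout)
	for {
		if err := syscall.Flock(int(f.Fd()), syscall.LOCK_EX|syscall.LOCK_NB); err == nil {
			return f, nil // caller releases with LOCK_UN and Close when done
		}
		if time.Now().After(deadline) {
			f.Close()
			return nil, errors.New("timed out waiting for lock on " + path)
		}
		time.Sleep(delay)
	}
}

func main() {
	f, err := acquireWithTimeout("/tmp/machines.lock", 500*time.Millisecond, 10*time.Minute)
	if err != nil {
		fmt.Println(err)
		return
	}
	defer f.Close()
	fmt.Println("lock held")
}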
	I1009 20:19:07.382244  492745 start.go:94] Provisioning new machine with config: &{Name:default-k8s-diff-port-417984 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-417984 Namespace:default API
ServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:
false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1009 20:19:07.382316  492745 start.go:126] createHost starting for "" (driver="docker")
	W1009 20:19:08.042400  487957 node_ready.go:57] node "embed-certs-565110" has "Ready":"False" status (will retry)
	W1009 20:19:10.042900  487957 node_ready.go:57] node "embed-certs-565110" has "Ready":"False" status (will retry)
	I1009 20:19:07.385740  492745 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1009 20:19:07.385981  492745 start.go:160] libmachine.API.Create for "default-k8s-diff-port-417984" (driver="docker")
	I1009 20:19:07.386026  492745 client.go:168] LocalClient.Create starting
	I1009 20:19:07.386095  492745 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21683-294150/.minikube/certs/ca.pem
	I1009 20:19:07.386136  492745 main.go:141] libmachine: Decoding PEM data...
	I1009 20:19:07.386154  492745 main.go:141] libmachine: Parsing certificate...
	I1009 20:19:07.386215  492745 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21683-294150/.minikube/certs/cert.pem
	I1009 20:19:07.386237  492745 main.go:141] libmachine: Decoding PEM data...
	I1009 20:19:07.386247  492745 main.go:141] libmachine: Parsing certificate...
	I1009 20:19:07.386614  492745 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-417984 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1009 20:19:07.403471  492745 cli_runner.go:211] docker network inspect default-k8s-diff-port-417984 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1009 20:19:07.403601  492745 network_create.go:284] running [docker network inspect default-k8s-diff-port-417984] to gather additional debugging logs...
	I1009 20:19:07.403627  492745 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-417984
	W1009 20:19:07.419377  492745 cli_runner.go:211] docker network inspect default-k8s-diff-port-417984 returned with exit code 1
	I1009 20:19:07.419412  492745 network_create.go:287] error running [docker network inspect default-k8s-diff-port-417984]: docker network inspect default-k8s-diff-port-417984: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network default-k8s-diff-port-417984 not found
	I1009 20:19:07.419425  492745 network_create.go:289] output of [docker network inspect default-k8s-diff-port-417984]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network default-k8s-diff-port-417984 not found
	
	** /stderr **
	I1009 20:19:07.419531  492745 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1009 20:19:07.435764  492745 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-3847a6577684 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:6e:b5:e6:7d:c7:ad} reservation:<nil>}
	I1009 20:19:07.436211  492745 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-5742e12e0dad IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:9e:82:91:fd:a6:fb} reservation:<nil>}
	I1009 20:19:07.436446  492745 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-11b099636187 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:0e:bb:e5:1b:6d:a2} reservation:<nil>}
	I1009 20:19:07.437411  492745 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-c39245925c93 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:ee:ec:b7:bd:5b:81} reservation:<nil>}
	I1009 20:19:07.437875  492745 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40019fc580}
	I1009 20:19:07.437893  492745 network_create.go:124] attempt to create docker network default-k8s-diff-port-417984 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1009 20:19:07.437949  492745 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=default-k8s-diff-port-417984 default-k8s-diff-port-417984
	I1009 20:19:07.496405  492745 network_create.go:108] docker network default-k8s-diff-port-417984 192.168.85.0/24 created
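The network_create lines above walk candidate private /24 subnets (192.168.49.0/24, 58, 67, 76 are taken by existing bridges) and settle on 192.168.85.0/24. A small Go sketch of that scan, stepping the third octet by 9 as the skipped subnets in this log suggest, is shown below; it is illustrative, not minikube's network package.

package main

import "fmt"

// pickFreeSubnet returns the first candidate /24 that no existing bridge uses.
func pickFreeSubnet(taken map[string]bool) (string, bool) {
	for octet := 49; octet <= 247; octet += 9 {
		candidate := fmt.Sprintf("192.168.%d.0/24", octet)
		if !taken[candidate] {
			return candidate, true
		}
	}
	return "", false
}

func main() {
	taken := map[string]bool{
		"192.168.49.0/24": true,
		"192.168.58.0/24": true,
		"192.168.67.0/24": true,
		"192.168.76.0/24": true,
	}
	fmt.Println(pickFreeSubnet(taken)) // 192.168.85.0/24 true
}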
	I1009 20:19:07.496438  492745 kic.go:121] calculated static IP "192.168.85.2" for the "default-k8s-diff-port-417984" container
	I1009 20:19:07.496508  492745 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1009 20:19:07.513914  492745 cli_runner.go:164] Run: docker volume create default-k8s-diff-port-417984 --label name.minikube.sigs.k8s.io=default-k8s-diff-port-417984 --label created_by.minikube.sigs.k8s.io=true
	I1009 20:19:07.532056  492745 oci.go:103] Successfully created a docker volume default-k8s-diff-port-417984
	I1009 20:19:07.532144  492745 cli_runner.go:164] Run: docker run --rm --name default-k8s-diff-port-417984-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=default-k8s-diff-port-417984 --entrypoint /usr/bin/test -v default-k8s-diff-port-417984:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 -d /var/lib
	I1009 20:19:08.115736  492745 oci.go:107] Successfully prepared a docker volume default-k8s-diff-port-417984
	I1009 20:19:08.115808  492745 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1009 20:19:08.115831  492745 kic.go:194] Starting extracting preloaded images to volume ...
	I1009 20:19:08.115905  492745 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21683-294150/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v default-k8s-diff-port-417984:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 -I lz4 -xf /preloaded.tar -C /extractDir
	W1009 20:19:12.043204  487957 node_ready.go:57] node "embed-certs-565110" has "Ready":"False" status (will retry)
	W1009 20:19:14.043273  487957 node_ready.go:57] node "embed-certs-565110" has "Ready":"False" status (will retry)
	I1009 20:19:12.790778  492745 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21683-294150/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v default-k8s-diff-port-417984:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 -I lz4 -xf /preloaded.tar -C /extractDir: (4.674832704s)
	I1009 20:19:12.790816  492745 kic.go:203] duration metric: took 4.674981383s to extract preloaded images to volume ...
	W1009 20:19:12.790955  492745 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1009 20:19:12.791075  492745 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1009 20:19:12.852787  492745 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname default-k8s-diff-port-417984 --name default-k8s-diff-port-417984 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=default-k8s-diff-port-417984 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=default-k8s-diff-port-417984 --network default-k8s-diff-port-417984 --ip 192.168.85.2 --volume default-k8s-diff-port-417984:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8444 --publish=127.0.0.1::8444 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92
	I1009 20:19:13.175732  492745 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-417984 --format={{.State.Running}}
	I1009 20:19:13.199491  492745 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-417984 --format={{.State.Status}}
	I1009 20:19:13.223316  492745 cli_runner.go:164] Run: docker exec default-k8s-diff-port-417984 stat /var/lib/dpkg/alternatives/iptables
	I1009 20:19:13.285206  492745 oci.go:144] the created container "default-k8s-diff-port-417984" has a running status.
	I1009 20:19:13.285234  492745 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21683-294150/.minikube/machines/default-k8s-diff-port-417984/id_rsa...
	I1009 20:19:13.383939  492745 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21683-294150/.minikube/machines/default-k8s-diff-port-417984/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1009 20:19:13.406299  492745 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-417984 --format={{.State.Status}}
	I1009 20:19:13.431119  492745 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1009 20:19:13.431140  492745 kic_runner.go:114] Args: [docker exec --privileged default-k8s-diff-port-417984 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1009 20:19:13.492614  492745 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-417984 --format={{.State.Status}}
	I1009 20:19:13.516288  492745 machine.go:93] provisionDockerMachine start ...
	I1009 20:19:13.516383  492745 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-417984
	I1009 20:19:13.539585  492745 main.go:141] libmachine: Using SSH client type: native
	I1009 20:19:13.540035  492745 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33441 <nil> <nil>}
	I1009 20:19:13.540049  492745 main.go:141] libmachine: About to run SSH command:
	hostname
	I1009 20:19:13.540795  492745 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1009 20:19:16.688655  492745 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-417984
	
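The "handshake failed: EOF" warning followed a few seconds later by a successful "hostname" run shows the SSH client retrying until sshd inside the freshly started container is ready. A hedged Go sketch of that retry loop with golang.org/x/crypto/ssh follows; the address, user, and key path mirror this log but how they are wired up here is an assumption.

package main

import (
	"fmt"
	"os"
	"time"

	"golang.org/x/crypto/ssh"
)

// dialWithRetry keeps attempting the SSH handshake until it succeeds or the
// overall timeout expires, as the log above does for the new node container.
func dialWithRetry(addr, user, keyPath string, timeout time.Duration) (*ssh.Client, error) {
	key, err := os.ReadFile(keyPath)
	if err != nil {
		return nil, err
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		return nil, err
	}
	cfg := &ssh.ClientConfig{
		User:            user,
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable only for a local test node
		Timeout:         10 * time.Second,
	}
	deadline := time.Now().Add(timeout)
	for {
		client, err := ssh.Dial("tcp", addr, cfg)
		if err == nil {
			return client, nil
		}
		if time.Now().After(deadline) {
			return nil, fmt.Errorf("ssh to %s never came up: %w", addr, err)
		}
		time.Sleep(time.Second)
	}
}

func main() {
	client, err := dialWithRetry("127.0.0.1:33441", "docker",
		"/home/jenkins/minikube-integration/21683-294150/.minikube/machines/default-k8s-diff-port-417984/id_rsa",
		2*time.Minute)
	if err != nil {
		fmt.Println(err)
		return
	}
	defer client.Close()
	fmt.Println("connected")
}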
	I1009 20:19:16.688682  492745 ubuntu.go:182] provisioning hostname "default-k8s-diff-port-417984"
	I1009 20:19:16.688750  492745 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-417984
	I1009 20:19:16.711179  492745 main.go:141] libmachine: Using SSH client type: native
	I1009 20:19:16.711499  492745 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33441 <nil> <nil>}
	I1009 20:19:16.711530  492745 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-417984 && echo "default-k8s-diff-port-417984" | sudo tee /etc/hostname
	I1009 20:19:16.867588  492745 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-417984
	
	I1009 20:19:16.867665  492745 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-417984
	I1009 20:19:16.885669  492745 main.go:141] libmachine: Using SSH client type: native
	I1009 20:19:16.885971  492745 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33441 <nil> <nil>}
	I1009 20:19:16.885994  492745 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-417984' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-417984/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-417984' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1009 20:19:17.034039  492745 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1009 20:19:17.034067  492745 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21683-294150/.minikube CaCertPath:/home/jenkins/minikube-integration/21683-294150/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21683-294150/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21683-294150/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21683-294150/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21683-294150/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21683-294150/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21683-294150/.minikube}
	I1009 20:19:17.034094  492745 ubuntu.go:190] setting up certificates
	I1009 20:19:17.034106  492745 provision.go:84] configureAuth start
	I1009 20:19:17.034179  492745 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-417984
	I1009 20:19:17.055692  492745 provision.go:143] copyHostCerts
	I1009 20:19:17.055774  492745 exec_runner.go:144] found /home/jenkins/minikube-integration/21683-294150/.minikube/ca.pem, removing ...
	I1009 20:19:17.055784  492745 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21683-294150/.minikube/ca.pem
	I1009 20:19:17.055861  492745 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-294150/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21683-294150/.minikube/ca.pem (1078 bytes)
	I1009 20:19:17.056314  492745 exec_runner.go:144] found /home/jenkins/minikube-integration/21683-294150/.minikube/cert.pem, removing ...
	I1009 20:19:17.056329  492745 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21683-294150/.minikube/cert.pem
	I1009 20:19:17.056371  492745 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-294150/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21683-294150/.minikube/cert.pem (1123 bytes)
	I1009 20:19:17.056440  492745 exec_runner.go:144] found /home/jenkins/minikube-integration/21683-294150/.minikube/key.pem, removing ...
	I1009 20:19:17.056445  492745 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21683-294150/.minikube/key.pem
	I1009 20:19:17.056470  492745 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-294150/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21683-294150/.minikube/key.pem (1679 bytes)
	I1009 20:19:17.056517  492745 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21683-294150/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21683-294150/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21683-294150/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-417984 san=[127.0.0.1 192.168.85.2 default-k8s-diff-port-417984 localhost minikube]
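The "generating server cert" line lists the SANs the apiserver certificate must carry (127.0.0.1, 192.168.85.2, default-k8s-diff-port-417984, localhost, minikube). The sketch below shows, with the Go standard library only, what issuing such a CA-signed certificate amounts to; it is illustrative and not minikube's provision code (errors are dropped for brevity).

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Throwaway CA, standing in for the cached minikubeCA key pair.
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().AddDate(10, 0, 0),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign | x509.KeyUsageDigitalSignature,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// Server certificate whose SANs match the log line above.
	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.default-k8s-diff-port-417984"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(3, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.85.2")},
		DNSNames:     []string{"default-k8s-diff-port-417984", "localhost", "minikube"},
	}
	srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
}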
	I1009 20:19:17.152035  492745 provision.go:177] copyRemoteCerts
	I1009 20:19:17.152136  492745 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1009 20:19:17.152296  492745 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-417984
	I1009 20:19:17.170630  492745 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33441 SSHKeyPath:/home/jenkins/minikube-integration/21683-294150/.minikube/machines/default-k8s-diff-port-417984/id_rsa Username:docker}
	I1009 20:19:17.273016  492745 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-294150/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1009 20:19:17.291688  492745 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-294150/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1009 20:19:17.309847  492745 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-294150/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1009 20:19:17.329316  492745 provision.go:87] duration metric: took 295.193572ms to configureAuth
	I1009 20:19:17.329348  492745 ubuntu.go:206] setting minikube options for container-runtime
	I1009 20:19:17.329539  492745 config.go:182] Loaded profile config "default-k8s-diff-port-417984": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1009 20:19:17.329652  492745 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-417984
	I1009 20:19:17.346616  492745 main.go:141] libmachine: Using SSH client type: native
	I1009 20:19:17.346941  492745 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33441 <nil> <nil>}
	I1009 20:19:17.346961  492745 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1009 20:19:17.679062  492745 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1009 20:19:17.679083  492745 machine.go:96] duration metric: took 4.162774672s to provisionDockerMachine
	I1009 20:19:17.679092  492745 client.go:171] duration metric: took 10.293055356s to LocalClient.Create
	I1009 20:19:17.679111  492745 start.go:168] duration metric: took 10.293131485s to libmachine.API.Create "default-k8s-diff-port-417984"
	I1009 20:19:17.679119  492745 start.go:294] postStartSetup for "default-k8s-diff-port-417984" (driver="docker")
	I1009 20:19:17.679128  492745 start.go:323] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1009 20:19:17.679218  492745 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1009 20:19:17.679283  492745 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-417984
	I1009 20:19:17.697426  492745 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33441 SSHKeyPath:/home/jenkins/minikube-integration/21683-294150/.minikube/machines/default-k8s-diff-port-417984/id_rsa Username:docker}
	I1009 20:19:17.801949  492745 ssh_runner.go:195] Run: cat /etc/os-release
	I1009 20:19:17.805337  492745 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1009 20:19:17.805371  492745 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1009 20:19:17.805385  492745 filesync.go:126] Scanning /home/jenkins/minikube-integration/21683-294150/.minikube/addons for local assets ...
	I1009 20:19:17.805441  492745 filesync.go:126] Scanning /home/jenkins/minikube-integration/21683-294150/.minikube/files for local assets ...
	I1009 20:19:17.805535  492745 filesync.go:149] local asset: /home/jenkins/minikube-integration/21683-294150/.minikube/files/etc/ssl/certs/2960022.pem -> 2960022.pem in /etc/ssl/certs
	I1009 20:19:17.805646  492745 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1009 20:19:17.813665  492745 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-294150/.minikube/files/etc/ssl/certs/2960022.pem --> /etc/ssl/certs/2960022.pem (1708 bytes)
	I1009 20:19:17.831907  492745 start.go:297] duration metric: took 152.772462ms for postStartSetup
	I1009 20:19:17.832299  492745 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-417984
	I1009 20:19:17.849344  492745 profile.go:143] Saving config to /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/default-k8s-diff-port-417984/config.json ...
	I1009 20:19:17.849634  492745 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1009 20:19:17.849683  492745 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-417984
	I1009 20:19:17.866396  492745 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33441 SSHKeyPath:/home/jenkins/minikube-integration/21683-294150/.minikube/machines/default-k8s-diff-port-417984/id_rsa Username:docker}
	I1009 20:19:17.966703  492745 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1009 20:19:17.971531  492745 start.go:129] duration metric: took 10.589196379s to createHost
	I1009 20:19:17.971553  492745 start.go:84] releasing machines lock for "default-k8s-diff-port-417984", held for 10.589327088s
	I1009 20:19:17.971624  492745 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-417984
	I1009 20:19:17.988627  492745 ssh_runner.go:195] Run: cat /version.json
	I1009 20:19:17.988688  492745 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-417984
	I1009 20:19:17.988702  492745 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1009 20:19:17.988772  492745 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-417984
	I1009 20:19:18.007361  492745 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33441 SSHKeyPath:/home/jenkins/minikube-integration/21683-294150/.minikube/machines/default-k8s-diff-port-417984/id_rsa Username:docker}
	I1009 20:19:18.015117  492745 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33441 SSHKeyPath:/home/jenkins/minikube-integration/21683-294150/.minikube/machines/default-k8s-diff-port-417984/id_rsa Username:docker}
	I1009 20:19:18.207102  492745 ssh_runner.go:195] Run: systemctl --version
	I1009 20:19:18.213659  492745 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1009 20:19:18.250259  492745 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1009 20:19:18.254675  492745 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1009 20:19:18.254746  492745 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1009 20:19:18.284103  492745 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1009 20:19:18.284124  492745 start.go:496] detecting cgroup driver to use...
	I1009 20:19:18.284155  492745 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1009 20:19:18.284202  492745 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1009 20:19:18.300728  492745 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1009 20:19:18.313449  492745 docker.go:218] disabling cri-docker service (if available) ...
	I1009 20:19:18.313515  492745 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1009 20:19:18.331430  492745 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1009 20:19:18.351120  492745 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1009 20:19:18.476608  492745 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1009 20:19:18.605428  492745 docker.go:234] disabling docker service ...
	I1009 20:19:18.605538  492745 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1009 20:19:18.626848  492745 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1009 20:19:18.640259  492745 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1009 20:19:18.782705  492745 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1009 20:19:18.906139  492745 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1009 20:19:18.919672  492745 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1009 20:19:18.934432  492745 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1009 20:19:18.934518  492745 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 20:19:18.943672  492745 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1009 20:19:18.943749  492745 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 20:19:18.952556  492745 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 20:19:18.961265  492745 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 20:19:18.970398  492745 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1009 20:19:18.978616  492745 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 20:19:18.987625  492745 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 20:19:19.002750  492745 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 20:19:19.015065  492745 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1009 20:19:19.023421  492745 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1009 20:19:19.031088  492745 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 20:19:19.154273  492745 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1009 20:19:19.286053  492745 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1009 20:19:19.286196  492745 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1009 20:19:19.290661  492745 start.go:564] Will wait 60s for crictl version
	I1009 20:19:19.290748  492745 ssh_runner.go:195] Run: which crictl
	I1009 20:19:19.294368  492745 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1009 20:19:19.330361  492745 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1009 20:19:19.330505  492745 ssh_runner.go:195] Run: crio --version
	I1009 20:19:19.360600  492745 ssh_runner.go:195] Run: crio --version
	I1009 20:19:19.393493  492745 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	W1009 20:19:16.043477  487957 node_ready.go:57] node "embed-certs-565110" has "Ready":"False" status (will retry)
	W1009 20:19:18.044003  487957 node_ready.go:57] node "embed-certs-565110" has "Ready":"False" status (will retry)
	W1009 20:19:20.044079  487957 node_ready.go:57] node "embed-certs-565110" has "Ready":"False" status (will retry)
	I1009 20:19:19.396331  492745 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-417984 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1009 20:19:19.414211  492745 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1009 20:19:19.418284  492745 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1009 20:19:19.428390  492745 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-417984 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-417984 Namespace:default APIServerHAVIP: APISer
verName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false C
ustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1009 20:19:19.428508  492745 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1009 20:19:19.428584  492745 ssh_runner.go:195] Run: sudo crictl images --output json
	I1009 20:19:19.461729  492745 crio.go:514] all images are preloaded for cri-o runtime.
	I1009 20:19:19.461753  492745 crio.go:433] Images already preloaded, skipping extraction
	I1009 20:19:19.461807  492745 ssh_runner.go:195] Run: sudo crictl images --output json
	I1009 20:19:19.487622  492745 crio.go:514] all images are preloaded for cri-o runtime.
	I1009 20:19:19.487645  492745 cache_images.go:85] Images are preloaded, skipping loading
	I1009 20:19:19.487652  492745 kubeadm.go:934] updating node { 192.168.85.2 8444 v1.34.1 crio true true} ...
	I1009 20:19:19.487790  492745 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=default-k8s-diff-port-417984 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-417984 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1009 20:19:19.487889  492745 ssh_runner.go:195] Run: crio config
	I1009 20:19:19.569233  492745 cni.go:84] Creating CNI manager for ""
	I1009 20:19:19.569260  492745 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1009 20:19:19.569276  492745 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1009 20:19:19.569300  492745 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8444 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-417984 NodeName:default-k8s-diff-port-417984 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.
crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1009 20:19:19.569437  492745 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-417984"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
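The config dumped above is one file (written as /var/tmp/minikube/kubeadm.yaml.new a few lines further down) containing four YAML documents separated by "---": InitConfiguration, ClusterConfiguration, KubeletConfiguration, and KubeProxyConfiguration. A small sketch of reading such a multi-document file back with gopkg.in/yaml.v3 follows; it only lists the kinds and is not part of minikube.

package main

import (
	"errors"
	"fmt"
	"io"
	"os"

	"gopkg.in/yaml.v3"
)

// listKinds decodes every YAML document in the file and collects its kind.
func listKinds(path string) ([]string, error) {
	f, err := os.Open(path)
	if err != nil {
		return nil, err
	}
	defer f.Close()

	var kinds []string
	dec := yaml.NewDecoder(f)
	for {
		var doc struct {
			APIVersion string `yaml:"apiVersion"`
			Kind       string `yaml:"kind"`
		}
		if err := dec.Decode(&doc); err != nil {
			if errors.Is(err, io.EOF) {
				return kinds, nil
			}
			return nil, err
		}
		kinds = append(kinds, doc.Kind)
	}
}

func main() {
	kinds, err := listKinds("/var/tmp/minikube/kubeadm.yaml.new")
	if err != nil {
		fmt.Println(err)
		return
	}
	// Expected for the dump above:
	// [InitConfiguration ClusterConfiguration KubeletConfiguration KubeProxyConfiguration]
	fmt.Println(kinds)
}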
	I1009 20:19:19.569524  492745 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1009 20:19:19.577908  492745 binaries.go:44] Found k8s binaries, skipping transfer
	I1009 20:19:19.578007  492745 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1009 20:19:19.586287  492745 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I1009 20:19:19.601607  492745 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1009 20:19:19.616244  492745 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2225 bytes)
	I1009 20:19:19.630364  492745 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1009 20:19:19.634131  492745 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1009 20:19:19.644633  492745 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 20:19:19.779753  492745 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1009 20:19:19.800104  492745 certs.go:69] Setting up /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/default-k8s-diff-port-417984 for IP: 192.168.85.2
	I1009 20:19:19.800178  492745 certs.go:195] generating shared ca certs ...
	I1009 20:19:19.800210  492745 certs.go:227] acquiring lock for ca certs: {Name:mk21fccf7de12baa51e226eec44546b42b579a70 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 20:19:19.800406  492745 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21683-294150/.minikube/ca.key
	I1009 20:19:19.800480  492745 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21683-294150/.minikube/proxy-client-ca.key
	I1009 20:19:19.800519  492745 certs.go:257] generating profile certs ...
	I1009 20:19:19.800618  492745 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/default-k8s-diff-port-417984/client.key
	I1009 20:19:19.800656  492745 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/default-k8s-diff-port-417984/client.crt with IP's: []
	I1009 20:19:22.386304  492745 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/default-k8s-diff-port-417984/client.crt ...
	I1009 20:19:22.386342  492745 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/default-k8s-diff-port-417984/client.crt: {Name:mk40e928378bc789a32b92f74569f322a3082856 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 20:19:22.386539  492745 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/default-k8s-diff-port-417984/client.key ...
	I1009 20:19:22.386555  492745 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/default-k8s-diff-port-417984/client.key: {Name:mk0796dc0691330034c7fcacec17ca1c05b3293d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 20:19:22.386649  492745 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/default-k8s-diff-port-417984/apiserver.key.0bef80d8
	I1009 20:19:22.386671  492745 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/default-k8s-diff-port-417984/apiserver.crt.0bef80d8 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I1009 20:19:22.701770  492745 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/default-k8s-diff-port-417984/apiserver.crt.0bef80d8 ...
	I1009 20:19:22.701805  492745 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/default-k8s-diff-port-417984/apiserver.crt.0bef80d8: {Name:mk15b2d0929ea8f0d0f6a8643b9ef4ad6b8e5b27 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 20:19:22.701984  492745 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/default-k8s-diff-port-417984/apiserver.key.0bef80d8 ...
	I1009 20:19:22.701996  492745 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/default-k8s-diff-port-417984/apiserver.key.0bef80d8: {Name:mk0c0b22c1afc5c8734b0515b6fb6b14ac031118 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 20:19:22.702064  492745 certs.go:382] copying /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/default-k8s-diff-port-417984/apiserver.crt.0bef80d8 -> /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/default-k8s-diff-port-417984/apiserver.crt
	I1009 20:19:22.702152  492745 certs.go:386] copying /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/default-k8s-diff-port-417984/apiserver.key.0bef80d8 -> /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/default-k8s-diff-port-417984/apiserver.key
	I1009 20:19:22.702219  492745 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/default-k8s-diff-port-417984/proxy-client.key
	I1009 20:19:22.702239  492745 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/default-k8s-diff-port-417984/proxy-client.crt with IP's: []
	I1009 20:19:23.140706  492745 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/default-k8s-diff-port-417984/proxy-client.crt ...
	I1009 20:19:23.140736  492745 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/default-k8s-diff-port-417984/proxy-client.crt: {Name:mkefa3515c9fd1b8d5df426c1e0fca089fee5250 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 20:19:23.140939  492745 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/default-k8s-diff-port-417984/proxy-client.key ...
	I1009 20:19:23.140955  492745 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/default-k8s-diff-port-417984/proxy-client.key: {Name:mka74b60b18f7badbf399bc2ad520964f1fcb6ba Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 20:19:23.141167  492745 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-294150/.minikube/certs/296002.pem (1338 bytes)
	W1009 20:19:23.141210  492745 certs.go:480] ignoring /home/jenkins/minikube-integration/21683-294150/.minikube/certs/296002_empty.pem, impossibly tiny 0 bytes
	I1009 20:19:23.141223  492745 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-294150/.minikube/certs/ca-key.pem (1679 bytes)
	I1009 20:19:23.141246  492745 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-294150/.minikube/certs/ca.pem (1078 bytes)
	I1009 20:19:23.141271  492745 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-294150/.minikube/certs/cert.pem (1123 bytes)
	I1009 20:19:23.141297  492745 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-294150/.minikube/certs/key.pem (1679 bytes)
	I1009 20:19:23.141346  492745 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-294150/.minikube/files/etc/ssl/certs/2960022.pem (1708 bytes)
	I1009 20:19:23.141955  492745 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-294150/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1009 20:19:23.162816  492745 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-294150/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1009 20:19:23.182242  492745 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-294150/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1009 20:19:23.201062  492745 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-294150/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1009 20:19:23.219976  492745 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/default-k8s-diff-port-417984/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1009 20:19:23.240378  492745 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/default-k8s-diff-port-417984/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1009 20:19:23.258925  492745 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/default-k8s-diff-port-417984/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1009 20:19:23.278929  492745 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/default-k8s-diff-port-417984/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1009 20:19:23.299217  492745 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-294150/.minikube/files/etc/ssl/certs/2960022.pem --> /usr/share/ca-certificates/2960022.pem (1708 bytes)
	I1009 20:19:23.320890  492745 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-294150/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1009 20:19:23.339551  492745 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-294150/.minikube/certs/296002.pem --> /usr/share/ca-certificates/296002.pem (1338 bytes)
	I1009 20:19:23.358122  492745 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
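The scp lines above distribute the freshly minted certs and the generated kubeconfig onto the node. A minimal local sketch of one such copy follows; the ssh key path, user, and node address are placeholders only, since minikube performs the equivalent through its own ssh_runner session rather than a plain scp call.

package main

import (
	"fmt"
	"os"
	"os/exec"
)

// copyToNode pushes a single local file to a path inside the node,
// loosely mirroring the scp steps in the log. Key, user, and host are
// illustrative placeholders.
func copyToNode(local, remote string) error {
	cmd := exec.Command("scp",
		"-i", os.ExpandEnv("$HOME/.minikube/machines/minikube/id_rsa"),
		local, "docker@192.168.85.2:"+remote)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	return cmd.Run()
}

func main() {
	if err := copyToNode(os.ExpandEnv("$HOME/.minikube/ca.crt"), "/var/lib/minikube/certs/ca.crt"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}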
	I1009 20:19:23.372628  492745 ssh_runner.go:195] Run: openssl version
	I1009 20:19:23.378994  492745 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1009 20:19:23.387978  492745 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1009 20:19:23.391830  492745 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  9 19:01 /usr/share/ca-certificates/minikubeCA.pem
	I1009 20:19:23.391914  492745 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1009 20:19:23.433545  492745 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1009 20:19:23.444668  492745 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/296002.pem && ln -fs /usr/share/ca-certificates/296002.pem /etc/ssl/certs/296002.pem"
	I1009 20:19:23.453418  492745 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/296002.pem
	I1009 20:19:23.457170  492745 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  9 19:08 /usr/share/ca-certificates/296002.pem
	I1009 20:19:23.457232  492745 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/296002.pem
	I1009 20:19:23.499280  492745 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/296002.pem /etc/ssl/certs/51391683.0"
	I1009 20:19:23.508176  492745 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2960022.pem && ln -fs /usr/share/ca-certificates/2960022.pem /etc/ssl/certs/2960022.pem"
	I1009 20:19:23.517054  492745 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2960022.pem
	I1009 20:19:23.520885  492745 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  9 19:08 /usr/share/ca-certificates/2960022.pem
	I1009 20:19:23.521004  492745 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2960022.pem
	I1009 20:19:23.563871  492745 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2960022.pem /etc/ssl/certs/3ec20f2e.0"
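The block above populates an OpenSSL-style trust directory: each CA certificate is hashed with "openssl x509 -hash -noout" and then linked as <hash>.0 under /etc/ssl/certs so TLS clients on the node will trust it. Below is a minimal local Go sketch of those two steps; the cert path and the use of os/exec in place of minikube's ssh_runner are assumptions for illustration only.

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkCACert computes the OpenSSL subject hash of a certificate and
// symlinks the cert as <hash>.0 in /etc/ssl/certs, as the log does
// with "ln -fs".
func linkCACert(certPath string) error {
	// Equivalent of: openssl x509 -hash -noout -in <certPath>
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return fmt.Errorf("hashing %s: %w", certPath, err)
	}
	hash := strings.TrimSpace(string(out))

	// Equivalent of: ln -fs <certPath> /etc/ssl/certs/<hash>.0
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	_ = os.Remove(link) // -f semantics: replace an existing link if present
	return os.Symlink(certPath, link)
}

func main() {
	if err := linkCACert("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}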
	I1009 20:19:23.573368  492745 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1009 20:19:23.577359  492745 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1009 20:19:23.577420  492745 kubeadm.go:400] StartCluster: {Name:default-k8s-diff-port-417984 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-417984 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1009 20:19:23.577538  492745 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1009 20:19:23.577607  492745 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1009 20:19:23.606509  492745 cri.go:89] found id: ""
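The two lines above query the CRI for any existing kube-system containers before starting the cluster; an empty result ("found id: \"\"") means there is nothing to clean up. A small sketch of the same crictl query follows; running it locally via os/exec is an assumption, since minikube runs it over SSH with sudo inside the node.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Equivalent of: crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system
	out, err := exec.Command("crictl", "ps", "-a", "--quiet",
		"--label", "io.kubernetes.pod.namespace=kube-system").Output()
	if err != nil {
		fmt.Println("crictl failed:", err)
		return
	}
	ids := strings.Fields(string(out))
	if len(ids) == 0 {
		fmt.Println(`found id: ""`) // matches the log: no control-plane containers yet
		return
	}
	fmt.Println("found", len(ids), "kube-system containers:", ids)
}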
	I1009 20:19:23.606631  492745 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1009 20:19:23.616546  492745 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1009 20:19:23.624640  492745 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1009 20:19:23.624712  492745 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1009 20:19:23.633148  492745 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1009 20:19:23.633173  492745 kubeadm.go:157] found existing configuration files:
	
	I1009 20:19:23.633229  492745 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I1009 20:19:23.641487  492745 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1009 20:19:23.641580  492745 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1009 20:19:23.650047  492745 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I1009 20:19:23.658193  492745 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1009 20:19:23.658283  492745 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1009 20:19:23.672067  492745 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I1009 20:19:23.680228  492745 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1009 20:19:23.680346  492745 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1009 20:19:23.688198  492745 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I1009 20:19:23.696290  492745 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1009 20:19:23.696409  492745 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
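The grep/rm sequence above implements the stale-config check: each kubeconfig under /etc/kubernetes is kept only if it already points at the expected control-plane endpoint, and is otherwise deleted so kubeadm can regenerate it. A minimal local sketch of that check is shown here, with plain file reads standing in for the sudo grep and rm commands run over SSH in the log.

package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	endpoint := "https://control-plane.minikube.internal:8444"
	for _, conf := range []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	} {
		data, err := os.ReadFile(conf)
		if err == nil && strings.Contains(string(data), endpoint) {
			fmt.Println("keeping", conf)
			continue
		}
		// Missing file or wrong endpoint: drop it, mirroring "sudo rm -f".
		os.Remove(conf)
		fmt.Println("removed (or absent)", conf)
	}
}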
	I1009 20:19:23.704311  492745 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1009 20:19:23.770982  492745 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1009 20:19:23.771326  492745 kubeadm.go:318] [preflight] Running pre-flight checks
	I1009 20:19:23.818808  492745 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1009 20:19:23.818932  492745 kubeadm.go:318] KERNEL_VERSION: 5.15.0-1084-aws
	I1009 20:19:23.819017  492745 kubeadm.go:318] OS: Linux
	I1009 20:19:23.819092  492745 kubeadm.go:318] CGROUPS_CPU: enabled
	I1009 20:19:23.819179  492745 kubeadm.go:318] CGROUPS_CPUACCT: enabled
	I1009 20:19:23.819280  492745 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1009 20:19:23.819387  492745 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1009 20:19:23.819449  492745 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1009 20:19:23.819510  492745 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1009 20:19:23.819566  492745 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1009 20:19:23.819624  492745 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1009 20:19:23.819681  492745 kubeadm.go:318] CGROUPS_BLKIO: enabled
	I1009 20:19:23.898399  492745 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1009 20:19:23.898524  492745 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1009 20:19:23.898632  492745 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1009 20:19:23.909706  492745 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
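The output above comes from the "kubeadm init" invocation started a few lines earlier, run with a pinned binaries directory prepended to PATH and a long --ignore-preflight-errors list. A local Go sketch of that invocation follows; os/exec replaces minikube's ssh_runner, and the ignore list is abbreviated for readability.

package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	// Abbreviated subset of the --ignore-preflight-errors list from the log.
	ignore := "DirAvailable--etc-kubernetes-manifests,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	cmd := exec.Command("kubeadm", "init",
		"--config", "/var/tmp/minikube/kubeadm.yaml",
		"--ignore-preflight-errors="+ignore)
	// Prepend the pinned kubeadm/kubelet binaries, as the env PATH override in the log does.
	// Go's os/exec keeps the last duplicate entry, so this override wins.
	cmd.Env = append(os.Environ(), "PATH=/var/lib/minikube/binaries/v1.34.1:"+os.Getenv("PATH"))
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		fmt.Fprintln(os.Stderr, "kubeadm init failed:", err)
		os.Exit(1)
	}
}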
	I1009 20:19:22.543114  487957 node_ready.go:49] node "embed-certs-565110" is "Ready"
	I1009 20:19:22.543141  487957 node_ready.go:38] duration metric: took 40.503775857s for node "embed-certs-565110" to be "Ready" ...
	I1009 20:19:22.543154  487957 api_server.go:52] waiting for apiserver process to appear ...
	I1009 20:19:22.543216  487957 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:19:22.601663  487957 api_server.go:72] duration metric: took 41.634627841s to wait for apiserver process to appear ...
	I1009 20:19:22.601686  487957 api_server.go:88] waiting for apiserver healthz status ...
	I1009 20:19:22.601706  487957 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1009 20:19:22.614301  487957 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1009 20:19:22.616183  487957 api_server.go:141] control plane version: v1.34.1
	I1009 20:19:22.616263  487957 api_server.go:131] duration metric: took 14.569304ms to wait for apiserver health ...
	I1009 20:19:22.616293  487957 system_pods.go:43] waiting for kube-system pods to appear ...
	I1009 20:19:22.622953  487957 system_pods.go:59] 8 kube-system pods found
	I1009 20:19:22.623050  487957 system_pods.go:61] "coredns-66bc5c9577-zmqwp" [ff3de144-4c77-4486-be1e-ab88492e6a18] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1009 20:19:22.623101  487957 system_pods.go:61] "etcd-embed-certs-565110" [4ad4c426-96dc-4bd7-bf86-efc6658f3526] Running
	I1009 20:19:22.623139  487957 system_pods.go:61] "kindnet-mjfwz" [f079f818-4d35-4673-ab85-6b2fe322c9f9] Running
	I1009 20:19:22.623166  487957 system_pods.go:61] "kube-apiserver-embed-certs-565110" [5a497a15-f487-4c78-bf3e-a53c6d9f83db] Running
	I1009 20:19:22.623211  487957 system_pods.go:61] "kube-controller-manager-embed-certs-565110" [7460b871-81b4-49ff-bad1-b30126a8635c] Running
	I1009 20:19:22.623244  487957 system_pods.go:61] "kube-proxy-bhwvw" [f9d0b727-064f-4a1c-88e2-e238e5f43c4b] Running
	I1009 20:19:22.623264  487957 system_pods.go:61] "kube-scheduler-embed-certs-565110" [f706c945-9f4f-4f6d-83f8-c6cddb3ff41d] Running
	I1009 20:19:22.623309  487957 system_pods.go:61] "storage-provisioner" [9811b3ef-6b1c-42ea-a8c8-bdf0028bd024] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1009 20:19:22.623344  487957 system_pods.go:74] duration metric: took 7.007192ms to wait for pod list to return data ...
	I1009 20:19:22.623385  487957 default_sa.go:34] waiting for default service account to be created ...
	I1009 20:19:22.635456  487957 default_sa.go:45] found service account: "default"
	I1009 20:19:22.635561  487957 default_sa.go:55] duration metric: took 12.151254ms for default service account to be created ...
	I1009 20:19:22.635601  487957 system_pods.go:116] waiting for k8s-apps to be running ...
	I1009 20:19:22.724490  487957 system_pods.go:86] 8 kube-system pods found
	I1009 20:19:22.724610  487957 system_pods.go:89] "coredns-66bc5c9577-zmqwp" [ff3de144-4c77-4486-be1e-ab88492e6a18] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1009 20:19:22.724636  487957 system_pods.go:89] "etcd-embed-certs-565110" [4ad4c426-96dc-4bd7-bf86-efc6658f3526] Running
	I1009 20:19:22.724686  487957 system_pods.go:89] "kindnet-mjfwz" [f079f818-4d35-4673-ab85-6b2fe322c9f9] Running
	I1009 20:19:22.724725  487957 system_pods.go:89] "kube-apiserver-embed-certs-565110" [5a497a15-f487-4c78-bf3e-a53c6d9f83db] Running
	I1009 20:19:22.724780  487957 system_pods.go:89] "kube-controller-manager-embed-certs-565110" [7460b871-81b4-49ff-bad1-b30126a8635c] Running
	I1009 20:19:22.724815  487957 system_pods.go:89] "kube-proxy-bhwvw" [f9d0b727-064f-4a1c-88e2-e238e5f43c4b] Running
	I1009 20:19:22.724864  487957 system_pods.go:89] "kube-scheduler-embed-certs-565110" [f706c945-9f4f-4f6d-83f8-c6cddb3ff41d] Running
	I1009 20:19:22.724904  487957 system_pods.go:89] "storage-provisioner" [9811b3ef-6b1c-42ea-a8c8-bdf0028bd024] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1009 20:19:22.724967  487957 retry.go:31] will retry after 295.169271ms: missing components: kube-dns
	I1009 20:19:23.024412  487957 system_pods.go:86] 8 kube-system pods found
	I1009 20:19:23.024493  487957 system_pods.go:89] "coredns-66bc5c9577-zmqwp" [ff3de144-4c77-4486-be1e-ab88492e6a18] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1009 20:19:23.024517  487957 system_pods.go:89] "etcd-embed-certs-565110" [4ad4c426-96dc-4bd7-bf86-efc6658f3526] Running
	I1009 20:19:23.024562  487957 system_pods.go:89] "kindnet-mjfwz" [f079f818-4d35-4673-ab85-6b2fe322c9f9] Running
	I1009 20:19:23.024590  487957 system_pods.go:89] "kube-apiserver-embed-certs-565110" [5a497a15-f487-4c78-bf3e-a53c6d9f83db] Running
	I1009 20:19:23.024612  487957 system_pods.go:89] "kube-controller-manager-embed-certs-565110" [7460b871-81b4-49ff-bad1-b30126a8635c] Running
	I1009 20:19:23.024649  487957 system_pods.go:89] "kube-proxy-bhwvw" [f9d0b727-064f-4a1c-88e2-e238e5f43c4b] Running
	I1009 20:19:23.024675  487957 system_pods.go:89] "kube-scheduler-embed-certs-565110" [f706c945-9f4f-4f6d-83f8-c6cddb3ff41d] Running
	I1009 20:19:23.024700  487957 system_pods.go:89] "storage-provisioner" [9811b3ef-6b1c-42ea-a8c8-bdf0028bd024] Running
	I1009 20:19:23.024737  487957 system_pods.go:126] duration metric: took 389.08451ms to wait for k8s-apps to be running ...
	I1009 20:19:23.024766  487957 system_svc.go:44] waiting for kubelet service to be running ....
	I1009 20:19:23.024857  487957 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1009 20:19:23.044665  487957 system_svc.go:56] duration metric: took 19.885552ms WaitForService to wait for kubelet
	I1009 20:19:23.044743  487957 kubeadm.go:586] duration metric: took 42.077721285s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1009 20:19:23.044778  487957 node_conditions.go:102] verifying NodePressure condition ...
	I1009 20:19:23.048553  487957 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1009 20:19:23.048633  487957 node_conditions.go:123] node cpu capacity is 2
	I1009 20:19:23.048669  487957 node_conditions.go:105] duration metric: took 3.871535ms to run NodePressure ...
	I1009 20:19:23.048711  487957 start.go:242] waiting for startup goroutines ...
	I1009 20:19:23.048737  487957 start.go:247] waiting for cluster config update ...
	I1009 20:19:23.048763  487957 start.go:256] writing updated cluster config ...
	I1009 20:19:23.049189  487957 ssh_runner.go:195] Run: rm -f paused
	I1009 20:19:23.053210  487957 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1009 20:19:23.058101  487957 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-zmqwp" in "kube-system" namespace to be "Ready" or be gone ...
	I1009 20:19:24.064650  487957 pod_ready.go:94] pod "coredns-66bc5c9577-zmqwp" is "Ready"
	I1009 20:19:24.064683  487957 pod_ready.go:86] duration metric: took 1.006509038s for pod "coredns-66bc5c9577-zmqwp" in "kube-system" namespace to be "Ready" or be gone ...
	I1009 20:19:24.068000  487957 pod_ready.go:83] waiting for pod "etcd-embed-certs-565110" in "kube-system" namespace to be "Ready" or be gone ...
	I1009 20:19:24.077434  487957 pod_ready.go:94] pod "etcd-embed-certs-565110" is "Ready"
	I1009 20:19:24.077464  487957 pod_ready.go:86] duration metric: took 9.44121ms for pod "etcd-embed-certs-565110" in "kube-system" namespace to be "Ready" or be gone ...
	I1009 20:19:24.080417  487957 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-565110" in "kube-system" namespace to be "Ready" or be gone ...
	I1009 20:19:24.089036  487957 pod_ready.go:94] pod "kube-apiserver-embed-certs-565110" is "Ready"
	I1009 20:19:24.089062  487957 pod_ready.go:86] duration metric: took 8.619456ms for pod "kube-apiserver-embed-certs-565110" in "kube-system" namespace to be "Ready" or be gone ...
	I1009 20:19:24.091793  487957 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-565110" in "kube-system" namespace to be "Ready" or be gone ...
	I1009 20:19:24.263892  487957 pod_ready.go:94] pod "kube-controller-manager-embed-certs-565110" is "Ready"
	I1009 20:19:24.263930  487957 pod_ready.go:86] duration metric: took 172.114886ms for pod "kube-controller-manager-embed-certs-565110" in "kube-system" namespace to be "Ready" or be gone ...
	I1009 20:19:24.463114  487957 pod_ready.go:83] waiting for pod "kube-proxy-bhwvw" in "kube-system" namespace to be "Ready" or be gone ...
	I1009 20:19:24.865977  487957 pod_ready.go:94] pod "kube-proxy-bhwvw" is "Ready"
	I1009 20:19:24.866007  487957 pod_ready.go:86] duration metric: took 402.865163ms for pod "kube-proxy-bhwvw" in "kube-system" namespace to be "Ready" or be gone ...
	I1009 20:19:25.063330  487957 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-565110" in "kube-system" namespace to be "Ready" or be gone ...
	I1009 20:19:25.462506  487957 pod_ready.go:94] pod "kube-scheduler-embed-certs-565110" is "Ready"
	I1009 20:19:25.462608  487957 pod_ready.go:86] duration metric: took 399.200085ms for pod "kube-scheduler-embed-certs-565110" in "kube-system" namespace to be "Ready" or be gone ...
	I1009 20:19:25.462650  487957 pod_ready.go:40] duration metric: took 2.409359878s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1009 20:19:25.544573  487957 start.go:628] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1009 20:19:25.549898  487957 out.go:179] * Done! kubectl is now configured to use "embed-certs-565110" cluster and "default" namespace by default
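The embed-certs-565110 start above waits for the apiserver by polling https://192.168.76.2:8443/healthz until it answers 200 "ok", then moves on to pod readiness. A minimal sketch of that wait loop is below; the endpoint URL and the skipped TLS verification are assumptions for illustration, since minikube authenticates with the cluster CA and client certificates instead.

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitForHealthz polls the apiserver healthz endpoint until it returns
// 200 "ok" or the timeout elapses, mirroring the api_server.go checks
// in the log.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout:   2 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK && string(body) == "ok" {
				return nil
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver at %s not healthy after %s", url, timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.76.2:8443/healthz", 4*time.Minute); err != nil {
		fmt.Println(err)
	}
}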
	I1009 20:19:23.914894  492745 out.go:252]   - Generating certificates and keys ...
	I1009 20:19:23.915047  492745 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1009 20:19:23.915140  492745 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1009 20:19:25.380075  492745 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1009 20:19:28.513706  492745 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1009 20:19:29.034026  492745 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1009 20:19:29.454738  492745 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1009 20:19:30.040402  492745 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1009 20:19:30.040596  492745 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [default-k8s-diff-port-417984 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1009 20:19:30.921674  492745 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1009 20:19:30.922077  492745 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [default-k8s-diff-port-417984 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1009 20:19:31.281016  492745 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1009 20:19:31.930297  492745 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1009 20:19:32.704878  492745 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1009 20:19:32.705222  492745 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1009 20:19:33.227017  492745 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1009 20:19:33.364780  492745 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1009 20:19:33.861713  492745 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1009 20:19:34.115716  492745 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1009 20:19:34.299046  492745 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1009 20:19:34.299734  492745 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1009 20:19:34.302423  492745 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	
	
	==> CRI-O <==
	Oct 09 20:19:22 embed-certs-565110 crio[840]: time="2025-10-09T20:19:22.570267472Z" level=info msg="Created container 009af2ec019d7c65b2d25a7c1c215c26ff10f334e57dff924a4d3fbfb0f74394: kube-system/coredns-66bc5c9577-zmqwp/coredns" id=53bfb20c-57c9-4fc4-a9df-76ebf67d5dba name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 20:19:22 embed-certs-565110 crio[840]: time="2025-10-09T20:19:22.58180156Z" level=info msg="Starting container: 009af2ec019d7c65b2d25a7c1c215c26ff10f334e57dff924a4d3fbfb0f74394" id=081cc33b-a996-4fe4-9655-bb801a20cde3 name=/runtime.v1.RuntimeService/StartContainer
	Oct 09 20:19:22 embed-certs-565110 crio[840]: time="2025-10-09T20:19:22.589982938Z" level=info msg="Started container" PID=1714 containerID=009af2ec019d7c65b2d25a7c1c215c26ff10f334e57dff924a4d3fbfb0f74394 description=kube-system/coredns-66bc5c9577-zmqwp/coredns id=081cc33b-a996-4fe4-9655-bb801a20cde3 name=/runtime.v1.RuntimeService/StartContainer sandboxID=8c751ca38741f63f4cbfba08b5b654ad1ffaf8c0bdf6ef19a626689cc2a04274
	Oct 09 20:19:26 embed-certs-565110 crio[840]: time="2025-10-09T20:19:26.16351364Z" level=info msg="Running pod sandbox: default/busybox/POD" id=7349de41-51e3-4279-8c8c-d6258140a4ed name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 09 20:19:26 embed-certs-565110 crio[840]: time="2025-10-09T20:19:26.163601108Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 09 20:19:26 embed-certs-565110 crio[840]: time="2025-10-09T20:19:26.178092659Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:939781aaa199230858a08579f1eabaf54640e88060a07dc91883461241007963 UID:dd2912f1-74cf-4ef4-86cf-f321b48ea8d9 NetNS:/var/run/netns/a37ca5a6-5fc2-4eca-946e-47d11d8a66b9 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x40004b5388}] Aliases:map[]}"
	Oct 09 20:19:26 embed-certs-565110 crio[840]: time="2025-10-09T20:19:26.178265076Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Oct 09 20:19:26 embed-certs-565110 crio[840]: time="2025-10-09T20:19:26.203029906Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:939781aaa199230858a08579f1eabaf54640e88060a07dc91883461241007963 UID:dd2912f1-74cf-4ef4-86cf-f321b48ea8d9 NetNS:/var/run/netns/a37ca5a6-5fc2-4eca-946e-47d11d8a66b9 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x40004b5388}] Aliases:map[]}"
	Oct 09 20:19:26 embed-certs-565110 crio[840]: time="2025-10-09T20:19:26.204814102Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Oct 09 20:19:26 embed-certs-565110 crio[840]: time="2025-10-09T20:19:26.221497034Z" level=info msg="Ran pod sandbox 939781aaa199230858a08579f1eabaf54640e88060a07dc91883461241007963 with infra container: default/busybox/POD" id=7349de41-51e3-4279-8c8c-d6258140a4ed name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 09 20:19:26 embed-certs-565110 crio[840]: time="2025-10-09T20:19:26.223053634Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=e93b1f52-671e-4196-89c4-9e305715a4fa name=/runtime.v1.ImageService/ImageStatus
	Oct 09 20:19:26 embed-certs-565110 crio[840]: time="2025-10-09T20:19:26.223345937Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=e93b1f52-671e-4196-89c4-9e305715a4fa name=/runtime.v1.ImageService/ImageStatus
	Oct 09 20:19:26 embed-certs-565110 crio[840]: time="2025-10-09T20:19:26.223451301Z" level=info msg="Neither image nor artfiact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=e93b1f52-671e-4196-89c4-9e305715a4fa name=/runtime.v1.ImageService/ImageStatus
	Oct 09 20:19:26 embed-certs-565110 crio[840]: time="2025-10-09T20:19:26.228454043Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=d8beea95-50e5-42a0-a0a7-027e51bd23bc name=/runtime.v1.ImageService/PullImage
	Oct 09 20:19:26 embed-certs-565110 crio[840]: time="2025-10-09T20:19:26.233486833Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Oct 09 20:19:28 embed-certs-565110 crio[840]: time="2025-10-09T20:19:28.254623337Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e" id=d8beea95-50e5-42a0-a0a7-027e51bd23bc name=/runtime.v1.ImageService/PullImage
	Oct 09 20:19:28 embed-certs-565110 crio[840]: time="2025-10-09T20:19:28.255871141Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=ff8a76ef-2708-4b1e-ad01-524d1b840ad0 name=/runtime.v1.ImageService/ImageStatus
	Oct 09 20:19:28 embed-certs-565110 crio[840]: time="2025-10-09T20:19:28.25986285Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=87773ff0-d15f-406d-b282-5b13d5657716 name=/runtime.v1.ImageService/ImageStatus
	Oct 09 20:19:28 embed-certs-565110 crio[840]: time="2025-10-09T20:19:28.269523378Z" level=info msg="Creating container: default/busybox/busybox" id=ed0df594-6d52-480c-8f85-118d1fee1c42 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 20:19:28 embed-certs-565110 crio[840]: time="2025-10-09T20:19:28.270492252Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 09 20:19:28 embed-certs-565110 crio[840]: time="2025-10-09T20:19:28.278469796Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 09 20:19:28 embed-certs-565110 crio[840]: time="2025-10-09T20:19:28.279103749Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 09 20:19:28 embed-certs-565110 crio[840]: time="2025-10-09T20:19:28.301877741Z" level=info msg="Created container 5118e1aace93e6e4088f602c82438c9c17b976f3247cf44fb936ee9ee7ec5968: default/busybox/busybox" id=ed0df594-6d52-480c-8f85-118d1fee1c42 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 20:19:28 embed-certs-565110 crio[840]: time="2025-10-09T20:19:28.309442241Z" level=info msg="Starting container: 5118e1aace93e6e4088f602c82438c9c17b976f3247cf44fb936ee9ee7ec5968" id=3ebda500-ed79-44d5-9c0b-2a8f6a95df1d name=/runtime.v1.RuntimeService/StartContainer
	Oct 09 20:19:28 embed-certs-565110 crio[840]: time="2025-10-09T20:19:28.312723097Z" level=info msg="Started container" PID=1772 containerID=5118e1aace93e6e4088f602c82438c9c17b976f3247cf44fb936ee9ee7ec5968 description=default/busybox/busybox id=3ebda500-ed79-44d5-9c0b-2a8f6a95df1d name=/runtime.v1.RuntimeService/StartContainer sandboxID=939781aaa199230858a08579f1eabaf54640e88060a07dc91883461241007963
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD                                          NAMESPACE
	5118e1aace93e       gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e   8 seconds ago        Running             busybox                   0                   939781aaa1992       busybox                                      default
	009af2ec019d7       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                      14 seconds ago       Running             coredns                   0                   8c751ca38741f       coredns-66bc5c9577-zmqwp                     kube-system
	096047a9255e2       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                      14 seconds ago       Running             storage-provisioner       0                   d73b2ed2b7a93       storage-provisioner                          kube-system
	67d9483c50467       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                      55 seconds ago       Running             kindnet-cni               0                   dcd345ece9cc3       kindnet-mjfwz                                kube-system
	daeba97a296fd       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                      55 seconds ago       Running             kube-proxy                0                   5f094211c82f1       kube-proxy-bhwvw                             kube-system
	cfe45edaee703       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                      About a minute ago   Running             kube-scheduler            0                   16a8cd1b47042       kube-scheduler-embed-certs-565110            kube-system
	5da61e2927d83       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                      About a minute ago   Running             kube-controller-manager   0                   a845b5aaeb1d9       kube-controller-manager-embed-certs-565110   kube-system
	6ea92e71b7ed8       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                      About a minute ago   Running             kube-apiserver            0                   e0f1c2a6a65b3       kube-apiserver-embed-certs-565110            kube-system
	f3dec093bfa32       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                      About a minute ago   Running             etcd                      0                   d0575e01ff8ae       etcd-embed-certs-565110                      kube-system
	
	
	==> coredns [009af2ec019d7c65b2d25a7c1c215c26ff10f334e57dff924a4d3fbfb0f74394] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:57846 - 54522 "HINFO IN 5320155218389741512.3249909265130754890. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.018422886s
	
	
	==> describe nodes <==
	Name:               embed-certs-565110
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=embed-certs-565110
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=67931d2cc0a2e29153be17ee4f2d502b8a45c9cb
	                    minikube.k8s.io/name=embed-certs-565110
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_09T20_18_36_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 09 Oct 2025 20:18:33 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-565110
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 09 Oct 2025 20:19:36 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 09 Oct 2025 20:19:22 +0000   Thu, 09 Oct 2025 20:18:27 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 09 Oct 2025 20:19:22 +0000   Thu, 09 Oct 2025 20:18:27 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 09 Oct 2025 20:19:22 +0000   Thu, 09 Oct 2025 20:18:27 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 09 Oct 2025 20:19:22 +0000   Thu, 09 Oct 2025 20:19:22 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    embed-certs-565110
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022292Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022292Ki
	  pods:               110
	System Info:
	  Machine ID:                 aa2ab7adb6b64bfb89d9ee9bcb860962
	  System UUID:                b35d8597-f430-4f2f-bbdb-0cd122e89c1c
	  Boot ID:                    7eb9c7d9-8be8-415a-b196-052b632584a4
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         12s
	  kube-system                 coredns-66bc5c9577-zmqwp                      100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     56s
	  kube-system                 etcd-embed-certs-565110                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         63s
	  kube-system                 kindnet-mjfwz                                 100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      57s
	  kube-system                 kube-apiserver-embed-certs-565110             250m (12%)    0 (0%)      0 (0%)           0 (0%)         62s
	  kube-system                 kube-controller-manager-embed-certs-565110    200m (10%)    0 (0%)      0 (0%)           0 (0%)         62s
	  kube-system                 kube-proxy-bhwvw                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         57s
	  kube-system                 kube-scheduler-embed-certs-565110             100m (5%)     0 (0%)      0 (0%)           0 (0%)         62s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         55s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age   From             Message
	  ----     ------                   ----  ----             -------
	  Normal   Starting                 55s   kube-proxy       
	  Normal   Starting                 62s   kubelet          Starting kubelet.
	  Warning  CgroupV1                 62s   kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  62s   kubelet          Node embed-certs-565110 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    62s   kubelet          Node embed-certs-565110 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     62s   kubelet          Node embed-certs-565110 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           58s   node-controller  Node embed-certs-565110 event: Registered Node embed-certs-565110 in Controller
	  Normal   NodeReady                15s   kubelet          Node embed-certs-565110 status is now: NodeReady
	
	
	==> dmesg <==
	[Oct 9 19:48] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:49] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:50] overlayfs: idmapped layers are currently not supported
	[ +27.967875] overlayfs: idmapped layers are currently not supported
	[  +2.167003] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:51] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:52] overlayfs: idmapped layers are currently not supported
	[ +41.056229] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:53] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:54] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:55] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:57] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:59] overlayfs: idmapped layers are currently not supported
	[ +30.257956] overlayfs: idmapped layers are currently not supported
	[Oct 9 20:02] overlayfs: idmapped layers are currently not supported
	[Oct 9 20:04] overlayfs: idmapped layers are currently not supported
	[Oct 9 20:06] overlayfs: idmapped layers are currently not supported
	[Oct 9 20:12] overlayfs: idmapped layers are currently not supported
	[Oct 9 20:14] overlayfs: idmapped layers are currently not supported
	[Oct 9 20:15] overlayfs: idmapped layers are currently not supported
	[Oct 9 20:16] overlayfs: idmapped layers are currently not supported
	[ +23.810739] overlayfs: idmapped layers are currently not supported
	[Oct 9 20:18] overlayfs: idmapped layers are currently not supported
	[ +26.082927] overlayfs: idmapped layers are currently not supported
	[Oct 9 20:19] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [f3dec093bfa32d2778bed6e88d3a535d84f438f29b2b292cddc0321fe72bbe66] <==
	{"level":"warn","ts":"2025-10-09T20:18:31.558547Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55572","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T20:18:31.582388Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55586","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T20:18:31.598356Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55610","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T20:18:31.616546Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55632","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T20:18:31.640510Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55660","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T20:18:31.650947Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55688","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T20:18:31.675392Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55706","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T20:18:31.690106Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55720","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T20:18:31.708938Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55724","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T20:18:31.727943Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55754","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T20:18:31.744658Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55772","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T20:18:31.765589Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55794","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T20:18:31.790989Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55812","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T20:18:31.804645Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55848","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T20:18:31.822176Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55864","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T20:18:31.839750Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55888","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T20:18:31.857613Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55906","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T20:18:31.874664Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55926","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T20:18:31.893635Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55934","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T20:18:31.908846Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55958","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T20:18:31.924527Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42128","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T20:18:31.952324Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42146","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T20:18:31.972951Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42168","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T20:18:31.985986Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42172","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T20:18:32.064203Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42196","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 20:19:37 up  3:01,  0 user,  load average: 3.15, 2.46, 1.91
	Linux embed-certs-565110 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [67d9483c50467680471ab03ef1d4343db0ad791abebfcfa1ab0209c8df9fdf60] <==
	I1009 20:18:41.501780       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1009 20:18:41.502022       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1009 20:18:41.502132       1 main.go:148] setting mtu 1500 for CNI 
	I1009 20:18:41.502144       1 main.go:178] kindnetd IP family: "ipv4"
	I1009 20:18:41.502155       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-09T20:18:41Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1009 20:18:41.703049       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1009 20:18:41.706434       1 controller.go:381] "Waiting for informer caches to sync"
	I1009 20:18:41.706477       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1009 20:18:41.707126       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1009 20:19:11.620407       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1009 20:19:11.703007       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1009 20:19:11.707586       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1009 20:19:11.707586       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	I1009 20:19:13.307501       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1009 20:19:13.307602       1 metrics.go:72] Registering metrics
	I1009 20:19:13.307696       1 controller.go:711] "Syncing nftables rules"
	I1009 20:19:21.621214       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1009 20:19:21.621399       1 main.go:301] handling current node
	I1009 20:19:31.618905       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1009 20:19:31.619019       1 main.go:301] handling current node
	
	
	==> kube-apiserver [6ea92e71b7ed850221ecbd0dddf8d35aab8c0e0747dfb052adf71baead9d0c46] <==
	I1009 20:18:33.061280       1 controller.go:667] quota admission added evaluator for: namespaces
	E1009 20:18:33.081204       1 controller.go:148] "Unhandled Error" err="while syncing ConfigMap \"kube-system/kube-apiserver-legacy-service-account-token-tracking\", err: namespaces \"kube-system\" not found" logger="UnhandledError"
	I1009 20:18:33.112960       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1009 20:18:33.113218       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1009 20:18:33.171061       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1009 20:18:33.171136       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1009 20:18:33.229974       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1009 20:18:33.650792       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1009 20:18:33.659114       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1009 20:18:33.659134       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1009 20:18:34.477842       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1009 20:18:34.530512       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1009 20:18:34.669498       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1009 20:18:34.677291       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.76.2]
	I1009 20:18:34.678544       1 controller.go:667] quota admission added evaluator for: endpoints
	I1009 20:18:34.683560       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1009 20:18:34.810423       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1009 20:18:35.537625       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1009 20:18:35.555752       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1009 20:18:35.567445       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1009 20:18:40.461332       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1009 20:18:40.466238       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1009 20:18:40.661014       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	I1009 20:18:40.762838       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	E1009 20:19:35.058808       1 conn.go:339] Error on socket receive: read tcp 192.168.76.2:8443->192.168.76.1:33902: use of closed network connection
	
	
	==> kube-controller-manager [5da61e2927d83d11e1606ed64b17c8169df4cdc25631c3950e054f09e3832277] <==
	I1009 20:18:39.852676       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1009 20:18:39.854017       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1009 20:18:39.854036       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1009 20:18:39.854081       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1009 20:18:39.854222       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1009 20:18:39.854018       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1009 20:18:39.854675       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1009 20:18:39.854684       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1009 20:18:39.854717       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1009 20:18:39.855844       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1009 20:18:39.855929       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1009 20:18:39.857225       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1009 20:18:39.862315       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1009 20:18:39.862448       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1009 20:18:39.862479       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1009 20:18:39.862484       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1009 20:18:39.862490       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1009 20:18:39.862378       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1009 20:18:39.867901       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1009 20:18:39.872228       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1009 20:18:39.872251       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1009 20:18:39.872258       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1009 20:18:39.876271       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="embed-certs-565110" podCIDRs=["10.244.0.0/24"]
	I1009 20:18:39.883766       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1009 20:19:24.859604       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [daeba97a296fd34678b5ff4375288ce0222d102e2fa6a48256f52c76d9810639] <==
	I1009 20:18:41.396448       1 server_linux.go:53] "Using iptables proxy"
	I1009 20:18:41.530857       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1009 20:18:41.631711       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1009 20:18:41.631744       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1009 20:18:41.631812       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1009 20:18:41.764583       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1009 20:18:41.768778       1 server_linux.go:132] "Using iptables Proxier"
	I1009 20:18:41.800454       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1009 20:18:41.800815       1 server.go:527] "Version info" version="v1.34.1"
	I1009 20:18:41.800834       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1009 20:18:41.802344       1 config.go:200] "Starting service config controller"
	I1009 20:18:41.802355       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1009 20:18:41.802372       1 config.go:106] "Starting endpoint slice config controller"
	I1009 20:18:41.802378       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1009 20:18:41.802401       1 config.go:403] "Starting serviceCIDR config controller"
	I1009 20:18:41.802406       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1009 20:18:41.811547       1 config.go:309] "Starting node config controller"
	I1009 20:18:41.815382       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1009 20:18:41.815483       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1009 20:18:41.908503       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1009 20:18:41.908549       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1009 20:18:41.908597       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [cfe45edaee703dfe76a38ecf8d3a57febf096f46eae639190bbb0fef06c632f6] <==
	E1009 20:18:32.942637       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1009 20:18:32.942739       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1009 20:18:32.942847       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1009 20:18:32.942954       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1009 20:18:32.943066       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1009 20:18:32.943162       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1009 20:18:32.943242       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1009 20:18:32.943473       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E1009 20:18:32.951104       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1009 20:18:32.951393       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1009 20:18:32.951455       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1009 20:18:33.770694       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1009 20:18:33.791964       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1009 20:18:33.798418       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1009 20:18:33.809309       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1009 20:18:33.871029       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1009 20:18:33.977972       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1009 20:18:34.007570       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1009 20:18:34.047829       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1009 20:18:34.096517       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E1009 20:18:34.113773       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1009 20:18:34.138220       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1009 20:18:34.164032       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1009 20:18:34.180440       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	I1009 20:18:37.195736       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 09 20:18:36 embed-certs-565110 kubelet[1284]: I1009 20:18:36.691783    1284 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-embed-certs-565110" podStartSLOduration=1.691751867 podStartE2EDuration="1.691751867s" podCreationTimestamp="2025-10-09 20:18:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-09 20:18:36.679544576 +0000 UTC m=+1.294091627" watchObservedRunningTime="2025-10-09 20:18:36.691751867 +0000 UTC m=+1.306298884"
	Oct 09 20:18:39 embed-certs-565110 kubelet[1284]: I1009 20:18:39.902914    1284 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Oct 09 20:18:39 embed-certs-565110 kubelet[1284]: I1009 20:18:39.904267    1284 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Oct 09 20:18:40 embed-certs-565110 kubelet[1284]: I1009 20:18:40.764856    1284 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f9d0b727-064f-4a1c-88e2-e238e5f43c4b-xtables-lock\") pod \"kube-proxy-bhwvw\" (UID: \"f9d0b727-064f-4a1c-88e2-e238e5f43c4b\") " pod="kube-system/kube-proxy-bhwvw"
	Oct 09 20:18:40 embed-certs-565110 kubelet[1284]: I1009 20:18:40.764910    1284 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/f9d0b727-064f-4a1c-88e2-e238e5f43c4b-kube-proxy\") pod \"kube-proxy-bhwvw\" (UID: \"f9d0b727-064f-4a1c-88e2-e238e5f43c4b\") " pod="kube-system/kube-proxy-bhwvw"
	Oct 09 20:18:40 embed-certs-565110 kubelet[1284]: I1009 20:18:40.764932    1284 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7ltxb\" (UniqueName: \"kubernetes.io/projected/f9d0b727-064f-4a1c-88e2-e238e5f43c4b-kube-api-access-7ltxb\") pod \"kube-proxy-bhwvw\" (UID: \"f9d0b727-064f-4a1c-88e2-e238e5f43c4b\") " pod="kube-system/kube-proxy-bhwvw"
	Oct 09 20:18:40 embed-certs-565110 kubelet[1284]: I1009 20:18:40.764968    1284 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f9d0b727-064f-4a1c-88e2-e238e5f43c4b-lib-modules\") pod \"kube-proxy-bhwvw\" (UID: \"f9d0b727-064f-4a1c-88e2-e238e5f43c4b\") " pod="kube-system/kube-proxy-bhwvw"
	Oct 09 20:18:40 embed-certs-565110 kubelet[1284]: I1009 20:18:40.764986    1284 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/f079f818-4d35-4673-ab85-6b2fe322c9f9-cni-cfg\") pod \"kindnet-mjfwz\" (UID: \"f079f818-4d35-4673-ab85-6b2fe322c9f9\") " pod="kube-system/kindnet-mjfwz"
	Oct 09 20:18:40 embed-certs-565110 kubelet[1284]: I1009 20:18:40.765012    1284 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f079f818-4d35-4673-ab85-6b2fe322c9f9-lib-modules\") pod \"kindnet-mjfwz\" (UID: \"f079f818-4d35-4673-ab85-6b2fe322c9f9\") " pod="kube-system/kindnet-mjfwz"
	Oct 09 20:18:40 embed-certs-565110 kubelet[1284]: I1009 20:18:40.765032    1284 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rln5x\" (UniqueName: \"kubernetes.io/projected/f079f818-4d35-4673-ab85-6b2fe322c9f9-kube-api-access-rln5x\") pod \"kindnet-mjfwz\" (UID: \"f079f818-4d35-4673-ab85-6b2fe322c9f9\") " pod="kube-system/kindnet-mjfwz"
	Oct 09 20:18:40 embed-certs-565110 kubelet[1284]: I1009 20:18:40.765049    1284 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f079f818-4d35-4673-ab85-6b2fe322c9f9-xtables-lock\") pod \"kindnet-mjfwz\" (UID: \"f079f818-4d35-4673-ab85-6b2fe322c9f9\") " pod="kube-system/kindnet-mjfwz"
	Oct 09 20:18:40 embed-certs-565110 kubelet[1284]: I1009 20:18:40.882900    1284 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	Oct 09 20:18:41 embed-certs-565110 kubelet[1284]: W1009 20:18:41.084755    1284 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/5db0c011c6081f65675c1c7e0e0cead1ee603fc85ef523d794ffef197f368e85/crio-5f094211c82f18d63e5f5c494d40324fc41250737146904b14a9ea7cf5990d24 WatchSource:0}: Error finding container 5f094211c82f18d63e5f5c494d40324fc41250737146904b14a9ea7cf5990d24: Status 404 returned error can't find the container with id 5f094211c82f18d63e5f5c494d40324fc41250737146904b14a9ea7cf5990d24
	Oct 09 20:18:41 embed-certs-565110 kubelet[1284]: I1009 20:18:41.752566    1284 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-bhwvw" podStartSLOduration=1.7525468100000001 podStartE2EDuration="1.75254681s" podCreationTimestamp="2025-10-09 20:18:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-09 20:18:41.752174563 +0000 UTC m=+6.366721589" watchObservedRunningTime="2025-10-09 20:18:41.75254681 +0000 UTC m=+6.367093828"
	Oct 09 20:18:41 embed-certs-565110 kubelet[1284]: I1009 20:18:41.752673    1284 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-mjfwz" podStartSLOduration=1.7526674679999998 podStartE2EDuration="1.752667468s" podCreationTimestamp="2025-10-09 20:18:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-09 20:18:41.701898055 +0000 UTC m=+6.316445089" watchObservedRunningTime="2025-10-09 20:18:41.752667468 +0000 UTC m=+6.367214503"
	Oct 09 20:19:22 embed-certs-565110 kubelet[1284]: I1009 20:19:22.085622    1284 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Oct 09 20:19:22 embed-certs-565110 kubelet[1284]: I1009 20:19:22.186789    1284 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nsfv5\" (UniqueName: \"kubernetes.io/projected/ff3de144-4c77-4486-be1e-ab88492e6a18-kube-api-access-nsfv5\") pod \"coredns-66bc5c9577-zmqwp\" (UID: \"ff3de144-4c77-4486-be1e-ab88492e6a18\") " pod="kube-system/coredns-66bc5c9577-zmqwp"
	Oct 09 20:19:22 embed-certs-565110 kubelet[1284]: I1009 20:19:22.186844    1284 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j2k5d\" (UniqueName: \"kubernetes.io/projected/9811b3ef-6b1c-42ea-a8c8-bdf0028bd024-kube-api-access-j2k5d\") pod \"storage-provisioner\" (UID: \"9811b3ef-6b1c-42ea-a8c8-bdf0028bd024\") " pod="kube-system/storage-provisioner"
	Oct 09 20:19:22 embed-certs-565110 kubelet[1284]: I1009 20:19:22.186879    1284 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/9811b3ef-6b1c-42ea-a8c8-bdf0028bd024-tmp\") pod \"storage-provisioner\" (UID: \"9811b3ef-6b1c-42ea-a8c8-bdf0028bd024\") " pod="kube-system/storage-provisioner"
	Oct 09 20:19:22 embed-certs-565110 kubelet[1284]: I1009 20:19:22.186906    1284 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ff3de144-4c77-4486-be1e-ab88492e6a18-config-volume\") pod \"coredns-66bc5c9577-zmqwp\" (UID: \"ff3de144-4c77-4486-be1e-ab88492e6a18\") " pod="kube-system/coredns-66bc5c9577-zmqwp"
	Oct 09 20:19:22 embed-certs-565110 kubelet[1284]: W1009 20:19:22.491031    1284 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/5db0c011c6081f65675c1c7e0e0cead1ee603fc85ef523d794ffef197f368e85/crio-8c751ca38741f63f4cbfba08b5b654ad1ffaf8c0bdf6ef19a626689cc2a04274 WatchSource:0}: Error finding container 8c751ca38741f63f4cbfba08b5b654ad1ffaf8c0bdf6ef19a626689cc2a04274: Status 404 returned error can't find the container with id 8c751ca38741f63f4cbfba08b5b654ad1ffaf8c0bdf6ef19a626689cc2a04274
	Oct 09 20:19:22 embed-certs-565110 kubelet[1284]: I1009 20:19:22.861550    1284 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=40.861530133 podStartE2EDuration="40.861530133s" podCreationTimestamp="2025-10-09 20:18:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-09 20:19:22.861490092 +0000 UTC m=+47.476037110" watchObservedRunningTime="2025-10-09 20:19:22.861530133 +0000 UTC m=+47.476077159"
	Oct 09 20:19:22 embed-certs-565110 kubelet[1284]: I1009 20:19:22.861812    1284 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-zmqwp" podStartSLOduration=41.861802958 podStartE2EDuration="41.861802958s" podCreationTimestamp="2025-10-09 20:18:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-09 20:19:22.833531725 +0000 UTC m=+47.448078784" watchObservedRunningTime="2025-10-09 20:19:22.861802958 +0000 UTC m=+47.476349975"
	Oct 09 20:19:26 embed-certs-565110 kubelet[1284]: I1009 20:19:26.025703    1284 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-smst2\" (UniqueName: \"kubernetes.io/projected/dd2912f1-74cf-4ef4-86cf-f321b48ea8d9-kube-api-access-smst2\") pod \"busybox\" (UID: \"dd2912f1-74cf-4ef4-86cf-f321b48ea8d9\") " pod="default/busybox"
	Oct 09 20:19:26 embed-certs-565110 kubelet[1284]: W1009 20:19:26.214358    1284 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/5db0c011c6081f65675c1c7e0e0cead1ee603fc85ef523d794ffef197f368e85/crio-939781aaa199230858a08579f1eabaf54640e88060a07dc91883461241007963 WatchSource:0}: Error finding container 939781aaa199230858a08579f1eabaf54640e88060a07dc91883461241007963: Status 404 returned error can't find the container with id 939781aaa199230858a08579f1eabaf54640e88060a07dc91883461241007963
	
	
	==> storage-provisioner [096047a9255e2d5549f29d24128253283a6075a51a6710f56e996335e326f921] <==
	I1009 20:19:22.599565       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1009 20:19:22.652432       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1009 20:19:22.652645       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1009 20:19:22.657989       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1009 20:19:22.680947       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1009 20:19:22.681297       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1009 20:19:22.681576       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-565110_853ec8f0-22f6-434c-96e9-b3138c74b62e!
	I1009 20:19:22.685480       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"363137e6-edc1-40e3-81f2-14e316bf471f", APIVersion:"v1", ResourceVersion:"421", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-565110_853ec8f0-22f6-434c-96e9-b3138c74b62e became leader
	W1009 20:19:22.708654       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1009 20:19:22.729901       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1009 20:19:22.795971       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-565110_853ec8f0-22f6-434c-96e9-b3138c74b62e!
	W1009 20:19:24.733540       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1009 20:19:24.739667       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1009 20:19:26.743985       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1009 20:19:26.751411       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1009 20:19:28.755422       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1009 20:19:28.761222       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1009 20:19:30.764922       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1009 20:19:30.771582       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1009 20:19:32.776484       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1009 20:19:32.784850       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1009 20:19:34.788333       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1009 20:19:34.794488       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1009 20:19:36.798667       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1009 20:19:36.818641       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-565110 -n embed-certs-565110
helpers_test.go:269: (dbg) Run:  kubectl --context embed-certs-565110 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/embed-certs/serial/EnableAddonWhileActive FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (3.27s)
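Unrelated to the failure itself, the repeated "Failed to watch ... is forbidden" lines in the kube-scheduler log above are typical startup noise: they appear while the apiserver is still bootstrapping its default RBAC objects and stop once the scheduler reports its caches as synced (20:18:37). If such errors persisted past startup, a quick spot check would be (a diagnostic sketch, not part of the test suite):

	kubectl --context embed-certs-565110 auth can-i list pods --as=system:kube-scheduler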

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (2.74s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-417984 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-417984 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (268.256497ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-09T20:20:44Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-417984 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
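The MK_ADDON_ENABLE_PAUSED exit above comes from minikube's pre-flight "is the runtime paused" check, which, per the stderr, shells out to "sudo runc list -f json" inside the node and fails because /run/runc does not exist there. A minimal manual reproduction against this profile would look like the following (a sketch; on CRI-O nodes "crictl ps" is usually the more reliable view of container state):

	out/minikube-linux-arm64 -p default-k8s-diff-port-417984 ssh -- "sudo ls /run/runc"
	out/minikube-linux-arm64 -p default-k8s-diff-port-417984 ssh -- "sudo runc list -f json"
	out/minikube-linux-arm64 -p default-k8s-diff-port-417984 ssh -- "sudo crictl ps"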
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context default-k8s-diff-port-417984 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-417984 describe deploy/metrics-server -n kube-system: exit status 1 (109.865648ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): deployments.apps "metrics-server" not found

                                                
                                                
** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context default-k8s-diff-port-417984 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
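The assertion above expects the metrics-server Deployment to reference the overridden image fake.domain/registry.k8s.io/echoserver:1.4, but because the addon enable aborted, the Deployment was never created and the describe output is empty. Once the addon does deploy, a narrower check than a full describe would be (a sketch, assuming the addon's usual Deployment name and namespace):

	kubectl --context default-k8s-diff-port-417984 -n kube-system get deploy metrics-server -o jsonpath='{.spec.template.spec.containers[0].image}'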
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect default-k8s-diff-port-417984
helpers_test.go:243: (dbg) docker inspect default-k8s-diff-port-417984:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "1f0d0c8a230b788cd206633a19ec2c3f4c5347ad7d829fb182e003f40efd7670",
	        "Created": "2025-10-09T20:19:12.869398438Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 493134,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-09T20:19:12.930632778Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:0e619a02aa1f399c748a05785ca381533d7a01df724b62f3a9ddd9e288db8f6f",
	        "ResolvConfPath": "/var/lib/docker/containers/1f0d0c8a230b788cd206633a19ec2c3f4c5347ad7d829fb182e003f40efd7670/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/1f0d0c8a230b788cd206633a19ec2c3f4c5347ad7d829fb182e003f40efd7670/hostname",
	        "HostsPath": "/var/lib/docker/containers/1f0d0c8a230b788cd206633a19ec2c3f4c5347ad7d829fb182e003f40efd7670/hosts",
	        "LogPath": "/var/lib/docker/containers/1f0d0c8a230b788cd206633a19ec2c3f4c5347ad7d829fb182e003f40efd7670/1f0d0c8a230b788cd206633a19ec2c3f4c5347ad7d829fb182e003f40efd7670-json.log",
	        "Name": "/default-k8s-diff-port-417984",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "default-k8s-diff-port-417984:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "default-k8s-diff-port-417984",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "1f0d0c8a230b788cd206633a19ec2c3f4c5347ad7d829fb182e003f40efd7670",
	                "LowerDir": "/var/lib/docker/overlay2/69199908f673e21207f723026f89b47767e510c7bef43a60d014ab9a5dff4f7d-init/diff:/var/lib/docker/overlay2/810a91395ed9b7ed2c0bbbdee8600efcf64f88722cbabc47d471235a9f901ed9/diff",
	                "MergedDir": "/var/lib/docker/overlay2/69199908f673e21207f723026f89b47767e510c7bef43a60d014ab9a5dff4f7d/merged",
	                "UpperDir": "/var/lib/docker/overlay2/69199908f673e21207f723026f89b47767e510c7bef43a60d014ab9a5dff4f7d/diff",
	                "WorkDir": "/var/lib/docker/overlay2/69199908f673e21207f723026f89b47767e510c7bef43a60d014ab9a5dff4f7d/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "default-k8s-diff-port-417984",
	                "Source": "/var/lib/docker/volumes/default-k8s-diff-port-417984/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-diff-port-417984",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-diff-port-417984",
	                "name.minikube.sigs.k8s.io": "default-k8s-diff-port-417984",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "b2c0af5c65faa3872d1c2b8f33dbabcabab1006b07743223c533033f81f2e36f",
	            "SandboxKey": "/var/run/docker/netns/b2c0af5c65fa",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33441"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33442"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33445"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33443"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33444"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "default-k8s-diff-port-417984": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "a2:31:51:8f:29:c0",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "08acd2192c7aac80b9d6df51ab71eaa1736eaa95c3d16e0c4f8feb8f8a4a1db2",
	                    "EndpointID": "ba0e6319d61052c9a383b2fc299368e2c61547d63dcaefa77f54c8609426ac4b",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "default-k8s-diff-port-417984",
	                        "1f0d0c8a230b"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
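For reference, the inspect dump above shows the apiserver port 8444/tcp published on 127.0.0.1:33444. A Go-template query pulls just that mapping without the full JSON (a sketch against the container name from this run):

	docker inspect -f '{{(index (index .NetworkSettings.Ports "8444/tcp") 0).HostPort}}' default-k8s-diff-port-417984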
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-417984 -n default-k8s-diff-port-417984
helpers_test.go:252: <<< TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p default-k8s-diff-port-417984 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p default-k8s-diff-port-417984 logs -n 25: (1.239100699s)
helpers_test.go:260: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬──────────────
───────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼──────────────
───────┤
	│ start   │ -p no-preload-020313 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-020313            │ jenkins │ v1.37.0 │ 09 Oct 25 20:16 UTC │ 09 Oct 25 20:17 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-670649 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-670649       │ jenkins │ v1.37.0 │ 09 Oct 25 20:16 UTC │                     │
	│ stop    │ -p old-k8s-version-670649 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-670649       │ jenkins │ v1.37.0 │ 09 Oct 25 20:16 UTC │ 09 Oct 25 20:16 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-670649 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-670649       │ jenkins │ v1.37.0 │ 09 Oct 25 20:16 UTC │ 09 Oct 25 20:16 UTC │
	│ start   │ -p old-k8s-version-670649 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-670649       │ jenkins │ v1.37.0 │ 09 Oct 25 20:16 UTC │ 09 Oct 25 20:17 UTC │
	│ addons  │ enable metrics-server -p no-preload-020313 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-020313            │ jenkins │ v1.37.0 │ 09 Oct 25 20:17 UTC │                     │
	│ stop    │ -p no-preload-020313 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-020313            │ jenkins │ v1.37.0 │ 09 Oct 25 20:17 UTC │ 09 Oct 25 20:17 UTC │
	│ image   │ old-k8s-version-670649 image list --format=json                                                                                                                                                                                               │ old-k8s-version-670649       │ jenkins │ v1.37.0 │ 09 Oct 25 20:17 UTC │ 09 Oct 25 20:17 UTC │
	│ pause   │ -p old-k8s-version-670649 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-670649       │ jenkins │ v1.37.0 │ 09 Oct 25 20:17 UTC │                     │
	│ addons  │ enable dashboard -p no-preload-020313 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-020313            │ jenkins │ v1.37.0 │ 09 Oct 25 20:17 UTC │ 09 Oct 25 20:17 UTC │
	│ start   │ -p no-preload-020313 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-020313            │ jenkins │ v1.37.0 │ 09 Oct 25 20:17 UTC │ 09 Oct 25 20:18 UTC │
	│ delete  │ -p old-k8s-version-670649                                                                                                                                                                                                                     │ old-k8s-version-670649       │ jenkins │ v1.37.0 │ 09 Oct 25 20:17 UTC │ 09 Oct 25 20:17 UTC │
	│ delete  │ -p old-k8s-version-670649                                                                                                                                                                                                                     │ old-k8s-version-670649       │ jenkins │ v1.37.0 │ 09 Oct 25 20:18 UTC │ 09 Oct 25 20:18 UTC │
	│ start   │ -p embed-certs-565110 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-565110           │ jenkins │ v1.37.0 │ 09 Oct 25 20:18 UTC │ 09 Oct 25 20:19 UTC │
	│ image   │ no-preload-020313 image list --format=json                                                                                                                                                                                                    │ no-preload-020313            │ jenkins │ v1.37.0 │ 09 Oct 25 20:18 UTC │ 09 Oct 25 20:18 UTC │
	│ pause   │ -p no-preload-020313 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-020313            │ jenkins │ v1.37.0 │ 09 Oct 25 20:18 UTC │                     │
	│ delete  │ -p no-preload-020313                                                                                                                                                                                                                          │ no-preload-020313            │ jenkins │ v1.37.0 │ 09 Oct 25 20:19 UTC │ 09 Oct 25 20:19 UTC │
	│ delete  │ -p no-preload-020313                                                                                                                                                                                                                          │ no-preload-020313            │ jenkins │ v1.37.0 │ 09 Oct 25 20:19 UTC │ 09 Oct 25 20:19 UTC │
	│ delete  │ -p disable-driver-mounts-613966                                                                                                                                                                                                               │ disable-driver-mounts-613966 │ jenkins │ v1.37.0 │ 09 Oct 25 20:19 UTC │ 09 Oct 25 20:19 UTC │
	│ start   │ -p default-k8s-diff-port-417984 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-417984 │ jenkins │ v1.37.0 │ 09 Oct 25 20:19 UTC │ 09 Oct 25 20:20 UTC │
	│ addons  │ enable metrics-server -p embed-certs-565110 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-565110           │ jenkins │ v1.37.0 │ 09 Oct 25 20:19 UTC │                     │
	│ stop    │ -p embed-certs-565110 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-565110           │ jenkins │ v1.37.0 │ 09 Oct 25 20:19 UTC │ 09 Oct 25 20:19 UTC │
	│ addons  │ enable dashboard -p embed-certs-565110 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-565110           │ jenkins │ v1.37.0 │ 09 Oct 25 20:19 UTC │ 09 Oct 25 20:19 UTC │
	│ start   │ -p embed-certs-565110 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-565110           │ jenkins │ v1.37.0 │ 09 Oct 25 20:19 UTC │ 09 Oct 25 20:20 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-417984 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-417984 │ jenkins │ v1.37.0 │ 09 Oct 25 20:20 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴──────────────
───────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/09 20:19:51
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1009 20:19:51.102892  495888 out.go:360] Setting OutFile to fd 1 ...
	I1009 20:19:51.103497  495888 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1009 20:19:51.103530  495888 out.go:374] Setting ErrFile to fd 2...
	I1009 20:19:51.103552  495888 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1009 20:19:51.103850  495888 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21683-294150/.minikube/bin
	I1009 20:19:51.104352  495888 out.go:368] Setting JSON to false
	I1009 20:19:51.105472  495888 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":10931,"bootTime":1760030261,"procs":190,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1009 20:19:51.105578  495888 start.go:143] virtualization:  
	I1009 20:19:51.109357  495888 out.go:179] * [embed-certs-565110] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1009 20:19:51.112633  495888 out.go:179]   - MINIKUBE_LOCATION=21683
	I1009 20:19:51.112721  495888 notify.go:221] Checking for updates...
	I1009 20:19:51.116857  495888 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1009 20:19:51.119909  495888 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21683-294150/kubeconfig
	I1009 20:19:51.122999  495888 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21683-294150/.minikube
	I1009 20:19:51.126053  495888 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1009 20:19:51.131523  495888 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1009 20:19:51.136756  495888 config.go:182] Loaded profile config "embed-certs-565110": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1009 20:19:51.137593  495888 driver.go:422] Setting default libvirt URI to qemu:///system
	I1009 20:19:51.187119  495888 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1009 20:19:51.187242  495888 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1009 20:19:51.297737  495888 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-09 20:19:51.282094864 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214827008 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1009 20:19:51.297886  495888 docker.go:319] overlay module found
	I1009 20:19:51.303198  495888 out.go:179] * Using the docker driver based on existing profile
	I1009 20:19:51.306311  495888 start.go:309] selected driver: docker
	I1009 20:19:51.306342  495888 start.go:930] validating driver "docker" against &{Name:embed-certs-565110 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-565110 Namespace:default APIServerHAVIP: APIServerN
ame:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:
9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1009 20:19:51.306461  495888 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1009 20:19:51.307345  495888 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1009 20:19:51.421645  495888 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-09 20:19:51.408696868 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214827008 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1009 20:19:51.422009  495888 start_flags.go:993] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1009 20:19:51.422036  495888 cni.go:84] Creating CNI manager for ""
	I1009 20:19:51.422095  495888 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1009 20:19:51.422133  495888 start.go:353] cluster config:
	{Name:embed-certs-565110 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-565110 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Contain
erRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false
DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1009 20:19:51.425473  495888 out.go:179] * Starting "embed-certs-565110" primary control-plane node in "embed-certs-565110" cluster
	I1009 20:19:51.428909  495888 cache.go:123] Beginning downloading kic base image for docker with crio
	I1009 20:19:51.431951  495888 out.go:179] * Pulling base image v0.0.48-1759745255-21703 ...
	I1009 20:19:49.888113  492745 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1009 20:19:49.888143  492745 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1009 20:19:49.888230  492745 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-417984
	I1009 20:19:49.908801  492745 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1009 20:19:49.908825  492745 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1009 20:19:49.908912  492745 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-417984
	I1009 20:19:49.938475  492745 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33441 SSHKeyPath:/home/jenkins/minikube-integration/21683-294150/.minikube/machines/default-k8s-diff-port-417984/id_rsa Username:docker}
	I1009 20:19:49.953320  492745 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33441 SSHKeyPath:/home/jenkins/minikube-integration/21683-294150/.minikube/machines/default-k8s-diff-port-417984/id_rsa Username:docker}
	I1009 20:19:50.098321  492745 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1009 20:19:50.189335  492745 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1009 20:19:50.195949  492745 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1009 20:19:50.304820  492745 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1009 20:19:50.669429  492745 start.go:977] {"host.minikube.internal": 192.168.85.1} host record injected into CoreDNS's ConfigMap
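	The step above rewrites the CoreDNS Corefile so that host.minikube.internal resolves to the gateway address. A minimal way to confirm the record landed, reusing the kubectl binary and kubeconfig paths shown in this log (run on the default-k8s-diff-port-417984 node; the grep pattern is only illustrative):

	    # print the CoreDNS Corefile and look for the injected hosts block
	    sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
	      -n kube-system get configmap coredns -o yaml | grep -A 3 'host.minikube.internal'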
	I1009 20:19:50.671170  492745 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-417984" to be "Ready" ...
	I1009 20:19:51.187848  492745 kapi.go:214] "coredns" deployment in "kube-system" namespace and "default-k8s-diff-port-417984" context rescaled to 1 replicas
	I1009 20:19:51.426572  492745 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.230578231s)
	I1009 20:19:51.426624  492745 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.121763432s)
	I1009 20:19:51.451362  492745 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1009 20:19:51.434864  495888 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1009 20:19:51.434951  495888 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21683-294150/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1009 20:19:51.434963  495888 cache.go:58] Caching tarball of preloaded images
	I1009 20:19:51.435068  495888 preload.go:233] Found /home/jenkins/minikube-integration/21683-294150/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1009 20:19:51.435079  495888 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1009 20:19:51.435197  495888 profile.go:143] Saving config to /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/embed-certs-565110/config.json ...
	I1009 20:19:51.435452  495888 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon
	I1009 20:19:51.457590  495888 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon, skipping pull
	I1009 20:19:51.457609  495888 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 exists in daemon, skipping load
	I1009 20:19:51.457621  495888 cache.go:232] Successfully downloaded all kic artifacts
	I1009 20:19:51.457644  495888 start.go:361] acquireMachinesLock for embed-certs-565110: {Name:mk32ec325145c7dbf708685a0b7d3c4450230c14 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1009 20:19:51.457699  495888 start.go:365] duration metric: took 38.269µs to acquireMachinesLock for "embed-certs-565110"
	I1009 20:19:51.457718  495888 start.go:97] Skipping create...Using existing machine configuration
	I1009 20:19:51.457724  495888 fix.go:55] fixHost starting: 
	I1009 20:19:51.457987  495888 cli_runner.go:164] Run: docker container inspect embed-certs-565110 --format={{.State.Status}}
	I1009 20:19:51.478706  495888 fix.go:113] recreateIfNeeded on embed-certs-565110: state=Stopped err=<nil>
	W1009 20:19:51.478734  495888 fix.go:139] unexpected machine state, will restart: <nil>
	I1009 20:19:51.454825  492745 addons.go:514] duration metric: took 1.631795731s for enable addons: enabled=[storage-provisioner default-storageclass]
	I1009 20:19:51.482974  495888 out.go:252] * Restarting existing docker container for "embed-certs-565110" ...
	I1009 20:19:51.483091  495888 cli_runner.go:164] Run: docker start embed-certs-565110
	I1009 20:19:51.807227  495888 cli_runner.go:164] Run: docker container inspect embed-certs-565110 --format={{.State.Status}}
	I1009 20:19:51.842702  495888 kic.go:430] container "embed-certs-565110" state is running.
	I1009 20:19:51.843128  495888 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-565110
	I1009 20:19:51.877524  495888 profile.go:143] Saving config to /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/embed-certs-565110/config.json ...
	I1009 20:19:51.877765  495888 machine.go:93] provisionDockerMachine start ...
	I1009 20:19:51.877836  495888 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-565110
	I1009 20:19:51.907208  495888 main.go:141] libmachine: Using SSH client type: native
	I1009 20:19:51.907536  495888 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33446 <nil> <nil>}
	I1009 20:19:51.907553  495888 main.go:141] libmachine: About to run SSH command:
	hostname
	I1009 20:19:51.908974  495888 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:33816->127.0.0.1:33446: read: connection reset by peer
	I1009 20:19:55.081220  495888 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-565110
	
	I1009 20:19:55.081310  495888 ubuntu.go:182] provisioning hostname "embed-certs-565110"
	I1009 20:19:55.081383  495888 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-565110
	I1009 20:19:55.100169  495888 main.go:141] libmachine: Using SSH client type: native
	I1009 20:19:55.100476  495888 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33446 <nil> <nil>}
	I1009 20:19:55.100493  495888 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-565110 && echo "embed-certs-565110" | sudo tee /etc/hostname
	I1009 20:19:55.264075  495888 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-565110
	
	I1009 20:19:55.264186  495888 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-565110
	I1009 20:19:55.282454  495888 main.go:141] libmachine: Using SSH client type: native
	I1009 20:19:55.282834  495888 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33446 <nil> <nil>}
	I1009 20:19:55.282859  495888 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-565110' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-565110/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-565110' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1009 20:19:55.433702  495888 main.go:141] libmachine: SSH cmd err, output: <nil>: 
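	As a side note, the script above only touches /etc/hosts when the 127.0.1.1 mapping for the node hostname is missing. A quick check from the host that it is in place, assuming the container name from this run:

	    # confirm the node hostname is mapped inside the container
	    docker exec embed-certs-565110 grep 'embed-certs-565110' /etc/hosts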
	I1009 20:19:55.433729  495888 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21683-294150/.minikube CaCertPath:/home/jenkins/minikube-integration/21683-294150/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21683-294150/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21683-294150/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21683-294150/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21683-294150/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21683-294150/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21683-294150/.minikube}
	I1009 20:19:55.433752  495888 ubuntu.go:190] setting up certificates
	I1009 20:19:55.433762  495888 provision.go:84] configureAuth start
	I1009 20:19:55.433835  495888 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-565110
	I1009 20:19:55.451034  495888 provision.go:143] copyHostCerts
	I1009 20:19:55.451107  495888 exec_runner.go:144] found /home/jenkins/minikube-integration/21683-294150/.minikube/ca.pem, removing ...
	I1009 20:19:55.451131  495888 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21683-294150/.minikube/ca.pem
	I1009 20:19:55.451208  495888 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-294150/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21683-294150/.minikube/ca.pem (1078 bytes)
	I1009 20:19:55.451360  495888 exec_runner.go:144] found /home/jenkins/minikube-integration/21683-294150/.minikube/cert.pem, removing ...
	I1009 20:19:55.451370  495888 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21683-294150/.minikube/cert.pem
	I1009 20:19:55.451400  495888 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-294150/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21683-294150/.minikube/cert.pem (1123 bytes)
	I1009 20:19:55.451482  495888 exec_runner.go:144] found /home/jenkins/minikube-integration/21683-294150/.minikube/key.pem, removing ...
	I1009 20:19:55.451493  495888 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21683-294150/.minikube/key.pem
	I1009 20:19:55.451520  495888 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-294150/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21683-294150/.minikube/key.pem (1679 bytes)
	I1009 20:19:55.451581  495888 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21683-294150/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21683-294150/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21683-294150/.minikube/certs/ca-key.pem org=jenkins.embed-certs-565110 san=[127.0.0.1 192.168.76.2 embed-certs-565110 localhost minikube]
	I1009 20:19:55.723228  495888 provision.go:177] copyRemoteCerts
	I1009 20:19:55.723701  495888 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1009 20:19:55.723756  495888 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-565110
	I1009 20:19:55.745356  495888 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33446 SSHKeyPath:/home/jenkins/minikube-integration/21683-294150/.minikube/machines/embed-certs-565110/id_rsa Username:docker}
	I1009 20:19:55.853673  495888 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-294150/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1009 20:19:55.872520  495888 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-294150/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1009 20:19:55.891414  495888 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-294150/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1009 20:19:55.911282  495888 provision.go:87] duration metric: took 477.503506ms to configureAuth
	I1009 20:19:55.911322  495888 ubuntu.go:206] setting minikube options for container-runtime
	I1009 20:19:55.911556  495888 config.go:182] Loaded profile config "embed-certs-565110": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1009 20:19:55.911693  495888 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-565110
	I1009 20:19:55.935681  495888 main.go:141] libmachine: Using SSH client type: native
	I1009 20:19:55.935991  495888 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33446 <nil> <nil>}
	I1009 20:19:55.936007  495888 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	W1009 20:19:52.674126  492745 node_ready.go:57] node "default-k8s-diff-port-417984" has "Ready":"False" status (will retry)
	W1009 20:19:54.674242  492745 node_ready.go:57] node "default-k8s-diff-port-417984" has "Ready":"False" status (will retry)
	W1009 20:19:56.675208  492745 node_ready.go:57] node "default-k8s-diff-port-417984" has "Ready":"False" status (will retry)
	I1009 20:19:56.260763  495888 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
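	That drop-in is what passes --insecure-registry 10.96.0.0/12 to CRI-O when it restarts. A sketch for confirming it took effect (container name and file path taken from this log):

	    # show the generated sysconfig drop-in and check crio came back up after the restart
	    docker exec embed-certs-565110 cat /etc/sysconfig/crio.minikube
	    docker exec embed-certs-565110 systemctl is-active crio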
	
	I1009 20:19:56.260789  495888 machine.go:96] duration metric: took 4.383005849s to provisionDockerMachine
	I1009 20:19:56.260800  495888 start.go:294] postStartSetup for "embed-certs-565110" (driver="docker")
	I1009 20:19:56.260819  495888 start.go:323] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1009 20:19:56.260900  495888 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1009 20:19:56.260943  495888 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-565110
	I1009 20:19:56.286630  495888 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33446 SSHKeyPath:/home/jenkins/minikube-integration/21683-294150/.minikube/machines/embed-certs-565110/id_rsa Username:docker}
	I1009 20:19:56.390555  495888 ssh_runner.go:195] Run: cat /etc/os-release
	I1009 20:19:56.395007  495888 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1009 20:19:56.395034  495888 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1009 20:19:56.395044  495888 filesync.go:126] Scanning /home/jenkins/minikube-integration/21683-294150/.minikube/addons for local assets ...
	I1009 20:19:56.395097  495888 filesync.go:126] Scanning /home/jenkins/minikube-integration/21683-294150/.minikube/files for local assets ...
	I1009 20:19:56.395176  495888 filesync.go:149] local asset: /home/jenkins/minikube-integration/21683-294150/.minikube/files/etc/ssl/certs/2960022.pem -> 2960022.pem in /etc/ssl/certs
	I1009 20:19:56.395272  495888 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1009 20:19:56.402958  495888 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-294150/.minikube/files/etc/ssl/certs/2960022.pem --> /etc/ssl/certs/2960022.pem (1708 bytes)
	I1009 20:19:56.424396  495888 start.go:297] duration metric: took 163.580707ms for postStartSetup
	I1009 20:19:56.424478  495888 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1009 20:19:56.424533  495888 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-565110
	I1009 20:19:56.447227  495888 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33446 SSHKeyPath:/home/jenkins/minikube-integration/21683-294150/.minikube/machines/embed-certs-565110/id_rsa Username:docker}
	I1009 20:19:56.550726  495888 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1009 20:19:56.556179  495888 fix.go:57] duration metric: took 5.098447768s for fixHost
	I1009 20:19:56.556209  495888 start.go:84] releasing machines lock for "embed-certs-565110", held for 5.098501504s
	I1009 20:19:56.556286  495888 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-565110
	I1009 20:19:56.573374  495888 ssh_runner.go:195] Run: cat /version.json
	I1009 20:19:56.573416  495888 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1009 20:19:56.573438  495888 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-565110
	I1009 20:19:56.573478  495888 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-565110
	I1009 20:19:56.593761  495888 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33446 SSHKeyPath:/home/jenkins/minikube-integration/21683-294150/.minikube/machines/embed-certs-565110/id_rsa Username:docker}
	I1009 20:19:56.624539  495888 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33446 SSHKeyPath:/home/jenkins/minikube-integration/21683-294150/.minikube/machines/embed-certs-565110/id_rsa Username:docker}
	I1009 20:19:56.701349  495888 ssh_runner.go:195] Run: systemctl --version
	I1009 20:19:56.800207  495888 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1009 20:19:56.837954  495888 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1009 20:19:56.842936  495888 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1009 20:19:56.843020  495888 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1009 20:19:56.851187  495888 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1009 20:19:56.851220  495888 start.go:496] detecting cgroup driver to use...
	I1009 20:19:56.851267  495888 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1009 20:19:56.851338  495888 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1009 20:19:56.868899  495888 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1009 20:19:56.882641  495888 docker.go:218] disabling cri-docker service (if available) ...
	I1009 20:19:56.882748  495888 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1009 20:19:56.901981  495888 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1009 20:19:56.922675  495888 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1009 20:19:57.045263  495888 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1009 20:19:57.164062  495888 docker.go:234] disabling docker service ...
	I1009 20:19:57.164140  495888 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1009 20:19:57.182535  495888 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1009 20:19:57.196529  495888 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1009 20:19:57.316352  495888 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1009 20:19:57.436860  495888 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1009 20:19:57.451031  495888 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1009 20:19:57.466163  495888 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1009 20:19:57.466305  495888 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 20:19:57.475527  495888 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1009 20:19:57.475677  495888 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 20:19:57.485065  495888 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 20:19:57.494276  495888 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 20:19:57.503522  495888 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1009 20:19:57.512068  495888 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 20:19:57.527270  495888 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 20:19:57.536150  495888 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 20:19:57.547538  495888 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1009 20:19:57.555776  495888 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1009 20:19:57.563474  495888 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 20:19:57.687781  495888 ssh_runner.go:195] Run: sudo systemctl restart crio
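	Taken together, the sed edits above pin the pause image to registry.k8s.io/pause:3.10.1, switch CRI-O to the cgroupfs cgroup manager with conmon in the pod cgroup, and re-open unprivileged low ports via default_sysctls. A sketch for inspecting the rewritten keys after the restart (container name and paths as shown in the log):

	    # inspect the keys the sed commands above rewrote in the CRI-O drop-in config
	    docker exec embed-certs-565110 grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
	      /etc/crio/crio.conf.d/02-crio.conf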
	I1009 20:19:57.832964  495888 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1009 20:19:57.833043  495888 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1009 20:19:57.837082  495888 start.go:564] Will wait 60s for crictl version
	I1009 20:19:57.837268  495888 ssh_runner.go:195] Run: which crictl
	I1009 20:19:57.841002  495888 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1009 20:19:57.884119  495888 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
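	The same probe can be reproduced by hand against the CRI socket configured in /etc/crictl.yaml earlier in this log; a minimal sketch, run inside the node container:

	    # query CRI-O over its CRI socket, mirroring minikube's version check
	    sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock version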
	I1009 20:19:57.884206  495888 ssh_runner.go:195] Run: crio --version
	I1009 20:19:57.920601  495888 ssh_runner.go:195] Run: crio --version
	I1009 20:19:57.953231  495888 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1009 20:19:57.956094  495888 cli_runner.go:164] Run: docker network inspect embed-certs-565110 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1009 20:19:57.973183  495888 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1009 20:19:57.977379  495888 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1009 20:19:57.987566  495888 kubeadm.go:883] updating cluster {Name:embed-certs-565110 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-565110 Namespace:default APIServerHAVIP: APIServerName:minikubeCA AP
IServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docke
r BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1009 20:19:57.987690  495888 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1009 20:19:57.987753  495888 ssh_runner.go:195] Run: sudo crictl images --output json
	I1009 20:19:58.034743  495888 crio.go:514] all images are preloaded for cri-o runtime.
	I1009 20:19:58.034768  495888 crio.go:433] Images already preloaded, skipping extraction
	I1009 20:19:58.034837  495888 ssh_runner.go:195] Run: sudo crictl images --output json
	I1009 20:19:58.063612  495888 crio.go:514] all images are preloaded for cri-o runtime.
	I1009 20:19:58.063641  495888 cache_images.go:85] Images are preloaded, skipping loading
	I1009 20:19:58.063649  495888 kubeadm.go:934] updating node { 192.168.76.2 8443 v1.34.1 crio true true} ...
	I1009 20:19:58.063757  495888 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=embed-certs-565110 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:embed-certs-565110 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
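	The ExecStart line above is installed as the 10-kubeadm.conf drop-in a little further down; once kubelet restarts, the effective unit (base file plus drop-in) can be viewed in one shot. A sketch, assuming the container name from this run:

	    # show the kubelet unit together with the 10-kubeadm.conf drop-in carrying the flags above
	    docker exec embed-certs-565110 systemctl cat kubelet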
	I1009 20:19:58.063850  495888 ssh_runner.go:195] Run: crio config
	I1009 20:19:58.119226  495888 cni.go:84] Creating CNI manager for ""
	I1009 20:19:58.119250  495888 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1009 20:19:58.119270  495888 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1009 20:19:58.119317  495888 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-565110 NodeName:embed-certs-565110 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/e
tc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1009 20:19:58.119477  495888 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-565110"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
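	The rendered config above can be sanity-checked offline before kubeadm consumes it. A sketch using the kubeadm binary staged under /var/lib/minikube/binaries/v1.34.1 and the yaml path it is copied to a few lines below; treat the exact subcommand (kubeadm config validate, present in recent kubeadm releases) as an assumption for this version:

	    # validate the generated kubeadm config against the v1beta4 schema (run inside the node)
	    sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new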
	
	I1009 20:19:58.119554  495888 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1009 20:19:58.127994  495888 binaries.go:44] Found k8s binaries, skipping transfer
	I1009 20:19:58.128078  495888 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1009 20:19:58.136084  495888 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (368 bytes)
	I1009 20:19:58.150168  495888 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1009 20:19:58.164940  495888 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2215 bytes)
	I1009 20:19:58.181309  495888 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1009 20:19:58.185366  495888 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1009 20:19:58.195602  495888 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 20:19:58.316882  495888 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1009 20:19:58.332912  495888 certs.go:69] Setting up /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/embed-certs-565110 for IP: 192.168.76.2
	I1009 20:19:58.332938  495888 certs.go:195] generating shared ca certs ...
	I1009 20:19:58.332955  495888 certs.go:227] acquiring lock for ca certs: {Name:mk21fccf7de12baa51e226eec44546b42b579a70 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 20:19:58.333097  495888 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21683-294150/.minikube/ca.key
	I1009 20:19:58.333194  495888 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21683-294150/.minikube/proxy-client-ca.key
	I1009 20:19:58.333206  495888 certs.go:257] generating profile certs ...
	I1009 20:19:58.333308  495888 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/embed-certs-565110/client.key
	I1009 20:19:58.333377  495888 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/embed-certs-565110/apiserver.key.e7b9ab9d
	I1009 20:19:58.333427  495888 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/embed-certs-565110/proxy-client.key
	I1009 20:19:58.333542  495888 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-294150/.minikube/certs/296002.pem (1338 bytes)
	W1009 20:19:58.333574  495888 certs.go:480] ignoring /home/jenkins/minikube-integration/21683-294150/.minikube/certs/296002_empty.pem, impossibly tiny 0 bytes
	I1009 20:19:58.333587  495888 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-294150/.minikube/certs/ca-key.pem (1679 bytes)
	I1009 20:19:58.333618  495888 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-294150/.minikube/certs/ca.pem (1078 bytes)
	I1009 20:19:58.333645  495888 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-294150/.minikube/certs/cert.pem (1123 bytes)
	I1009 20:19:58.333674  495888 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-294150/.minikube/certs/key.pem (1679 bytes)
	I1009 20:19:58.333723  495888 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-294150/.minikube/files/etc/ssl/certs/2960022.pem (1708 bytes)
	I1009 20:19:58.334393  495888 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-294150/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1009 20:19:58.356891  495888 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-294150/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1009 20:19:58.378429  495888 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-294150/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1009 20:19:58.402388  495888 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-294150/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1009 20:19:58.429843  495888 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/embed-certs-565110/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1009 20:19:58.457145  495888 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/embed-certs-565110/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1009 20:19:58.482912  495888 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/embed-certs-565110/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1009 20:19:58.511578  495888 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/embed-certs-565110/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1009 20:19:58.532342  495888 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-294150/.minikube/certs/296002.pem --> /usr/share/ca-certificates/296002.pem (1338 bytes)
	I1009 20:19:58.560879  495888 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-294150/.minikube/files/etc/ssl/certs/2960022.pem --> /usr/share/ca-certificates/2960022.pem (1708 bytes)
	I1009 20:19:58.585808  495888 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-294150/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1009 20:19:58.606843  495888 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1009 20:19:58.621525  495888 ssh_runner.go:195] Run: openssl version
	I1009 20:19:58.628148  495888 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/296002.pem && ln -fs /usr/share/ca-certificates/296002.pem /etc/ssl/certs/296002.pem"
	I1009 20:19:58.637529  495888 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/296002.pem
	I1009 20:19:58.641561  495888 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  9 19:08 /usr/share/ca-certificates/296002.pem
	I1009 20:19:58.641652  495888 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/296002.pem
	I1009 20:19:58.687261  495888 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/296002.pem /etc/ssl/certs/51391683.0"
	I1009 20:19:58.695792  495888 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2960022.pem && ln -fs /usr/share/ca-certificates/2960022.pem /etc/ssl/certs/2960022.pem"
	I1009 20:19:58.704978  495888 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2960022.pem
	I1009 20:19:58.709478  495888 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  9 19:08 /usr/share/ca-certificates/2960022.pem
	I1009 20:19:58.709569  495888 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2960022.pem
	I1009 20:19:58.751071  495888 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2960022.pem /etc/ssl/certs/3ec20f2e.0"
	I1009 20:19:58.759246  495888 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1009 20:19:58.767814  495888 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1009 20:19:58.772140  495888 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  9 19:01 /usr/share/ca-certificates/minikubeCA.pem
	I1009 20:19:58.772208  495888 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1009 20:19:58.813601  495888 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
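	The hash/symlink sequence above follows OpenSSL's subject-hash convention: each trusted CA in /etc/ssl/certs is reachable through a <subject_hash>.0 symlink so verification can locate it. The same check by hand, run inside the node, with the file name and the b5213941 hash taken from this run:

	    # the subject hash printed here is what names the /etc/ssl/certs/b5213941.0 symlink
	    openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	    ls -l /etc/ssl/certs/b5213941.0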
	I1009 20:19:58.821600  495888 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1009 20:19:58.825585  495888 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1009 20:19:58.867122  495888 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1009 20:19:58.915221  495888 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1009 20:19:58.961581  495888 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1009 20:19:59.019501  495888 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1009 20:19:59.064706  495888 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
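	Each -checkend 86400 probe above asks whether the certificate expires within the next 86400 seconds (24 hours); exit status 0 means it remains valid past that window, non-zero means it does not. A one-line sketch against any of the certs listed:

	    # exit 0: still valid 24h from now; non-zero: expiring within 24h
	    sudo openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400 \
	      && echo 'valid beyond 24h' || echo 'expiring within 24h'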
	I1009 20:19:59.118556  495888 kubeadm.go:400] StartCluster: {Name:embed-certs-565110 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-565110 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APISe
rverNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker B
inaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1009 20:19:59.118710  495888 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1009 20:19:59.118804  495888 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1009 20:19:59.210857  495888 cri.go:89] found id: "1de1928d9c10a7383f82f9d07f373a124ba301e004ce8acd88dd8a940cd3c874"
	I1009 20:19:59.210931  495888 cri.go:89] found id: "263af593d94482c92965e6f0511548fd1ccf9f2292e732c23158498a550ac2a4"
	I1009 20:19:59.210953  495888 cri.go:89] found id: "e15b99435508a3068f9f9d4d692dd1bd7f56391601b5b0179b6642e79aa3078f"
	I1009 20:19:59.210979  495888 cri.go:89] found id: "6d66a1c644fe699013f3d024b65f4dfa2c5f6bb2e344eef4ab51199503d6bb1f"
	I1009 20:19:59.211008  495888 cri.go:89] found id: ""
	I1009 20:19:59.211087  495888 ssh_runner.go:195] Run: sudo runc list -f json
	W1009 20:19:59.238031  495888 kubeadm.go:407] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-09T20:19:59Z" level=error msg="open /run/runc: no such file or directory"
	I1009 20:19:59.238192  495888 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1009 20:19:59.251518  495888 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1009 20:19:59.251585  495888 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1009 20:19:59.251666  495888 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1009 20:19:59.267653  495888 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1009 20:19:59.268282  495888 kubeconfig.go:47] verify endpoint returned: get endpoint: "embed-certs-565110" does not appear in /home/jenkins/minikube-integration/21683-294150/kubeconfig
	I1009 20:19:59.268619  495888 kubeconfig.go:62] /home/jenkins/minikube-integration/21683-294150/kubeconfig needs updating (will repair): [kubeconfig missing "embed-certs-565110" cluster setting kubeconfig missing "embed-certs-565110" context setting]
	I1009 20:19:59.269134  495888 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-294150/kubeconfig: {Name:mke4661344aa77e10bf6690825e8d3a29b29b411 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 20:19:59.270821  495888 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1009 20:19:59.290193  495888 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.76.2
	I1009 20:19:59.290271  495888 kubeadm.go:601] duration metric: took 38.666826ms to restartPrimaryControlPlane
	I1009 20:19:59.290297  495888 kubeadm.go:402] duration metric: took 171.753193ms to StartCluster
	I1009 20:19:59.290339  495888 settings.go:142] acquiring lock: {Name:mk20228ebaa2294ae35726600a0d8058088b24a4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 20:19:59.290426  495888 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21683-294150/kubeconfig
	I1009 20:19:59.292269  495888 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-294150/kubeconfig: {Name:mke4661344aa77e10bf6690825e8d3a29b29b411 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 20:19:59.297424  495888 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1009 20:19:59.297656  495888 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1009 20:19:59.301070  495888 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-565110"
	I1009 20:19:59.301153  495888 addons.go:238] Setting addon storage-provisioner=true in "embed-certs-565110"
	W1009 20:19:59.301182  495888 addons.go:247] addon storage-provisioner should already be in state true
	I1009 20:19:59.301226  495888 host.go:66] Checking if "embed-certs-565110" exists ...
	I1009 20:19:59.301774  495888 cli_runner.go:164] Run: docker container inspect embed-certs-565110 --format={{.State.Status}}
	I1009 20:19:59.304957  495888 out.go:179] * Verifying Kubernetes components...
	I1009 20:19:59.308175  495888 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 20:19:59.311519  495888 addons.go:69] Setting default-storageclass=true in profile "embed-certs-565110"
	I1009 20:19:59.311558  495888 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-565110"
	I1009 20:19:59.311891  495888 cli_runner.go:164] Run: docker container inspect embed-certs-565110 --format={{.State.Status}}
	I1009 20:19:59.321256  495888 addons.go:69] Setting dashboard=true in profile "embed-certs-565110"
	I1009 20:19:59.321286  495888 addons.go:238] Setting addon dashboard=true in "embed-certs-565110"
	W1009 20:19:59.321295  495888 addons.go:247] addon dashboard should already be in state true
	I1009 20:19:59.321329  495888 host.go:66] Checking if "embed-certs-565110" exists ...
	I1009 20:19:59.321807  495888 cli_runner.go:164] Run: docker container inspect embed-certs-565110 --format={{.State.Status}}
	I1009 20:19:59.297901  495888 config.go:182] Loaded profile config "embed-certs-565110": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1009 20:19:59.346629  495888 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1009 20:19:59.351650  495888 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1009 20:19:59.351675  495888 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1009 20:19:59.351737  495888 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-565110
	I1009 20:19:59.370094  495888 addons.go:238] Setting addon default-storageclass=true in "embed-certs-565110"
	W1009 20:19:59.370120  495888 addons.go:247] addon default-storageclass should already be in state true
	I1009 20:19:59.370146  495888 host.go:66] Checking if "embed-certs-565110" exists ...
	I1009 20:19:59.370575  495888 cli_runner.go:164] Run: docker container inspect embed-certs-565110 --format={{.State.Status}}
	I1009 20:19:59.387047  495888 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1009 20:19:59.390364  495888 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1009 20:19:59.396710  495888 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1009 20:19:59.396746  495888 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1009 20:19:59.396832  495888 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-565110
	I1009 20:19:59.429347  495888 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1009 20:19:59.429370  495888 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1009 20:19:59.429437  495888 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-565110
	I1009 20:19:59.430849  495888 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33446 SSHKeyPath:/home/jenkins/minikube-integration/21683-294150/.minikube/machines/embed-certs-565110/id_rsa Username:docker}
	I1009 20:19:59.464220  495888 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33446 SSHKeyPath:/home/jenkins/minikube-integration/21683-294150/.minikube/machines/embed-certs-565110/id_rsa Username:docker}
	I1009 20:19:59.473433  495888 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33446 SSHKeyPath:/home/jenkins/minikube-integration/21683-294150/.minikube/machines/embed-certs-565110/id_rsa Username:docker}
	I1009 20:19:59.694365  495888 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1009 20:19:59.717524  495888 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1009 20:19:59.840086  495888 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1009 20:19:59.840111  495888 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1009 20:19:59.864918  495888 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1009 20:19:59.880095  495888 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1009 20:19:59.880122  495888 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1009 20:19:59.900442  495888 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1009 20:19:59.900468  495888 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1009 20:19:59.917184  495888 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1009 20:19:59.917207  495888 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1009 20:19:59.939131  495888 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1009 20:19:59.939158  495888 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1009 20:19:59.990478  495888 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1009 20:19:59.990505  495888 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1009 20:20:00.111657  495888 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1009 20:20:00.111680  495888 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1009 20:20:00.389825  495888 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1009 20:20:00.389849  495888 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1009 20:20:00.555191  495888 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1009 20:20:00.555224  495888 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1009 20:20:00.588057  495888 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1009 20:19:59.174811  492745 node_ready.go:57] node "default-k8s-diff-port-417984" has "Ready":"False" status (will retry)
	W1009 20:20:01.175375  492745 node_ready.go:57] node "default-k8s-diff-port-417984" has "Ready":"False" status (will retry)
	I1009 20:20:05.769604  495888 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (6.075202514s)
	I1009 20:20:05.769670  495888 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (6.052119796s)
	I1009 20:20:05.769700  495888 node_ready.go:35] waiting up to 6m0s for node "embed-certs-565110" to be "Ready" ...
	I1009 20:20:05.770038  495888 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (5.905095554s)
	I1009 20:20:05.812828  495888 node_ready.go:49] node "embed-certs-565110" is "Ready"
	I1009 20:20:05.812865  495888 node_ready.go:38] duration metric: took 43.143299ms for node "embed-certs-565110" to be "Ready" ...
	I1009 20:20:05.812881  495888 api_server.go:52] waiting for apiserver process to appear ...
	I1009 20:20:05.812944  495888 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:20:05.956969  495888 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (5.368861055s)
	I1009 20:20:05.957203  495888 api_server.go:72] duration metric: took 6.656317655s to wait for apiserver process to appear ...
	I1009 20:20:05.957223  495888 api_server.go:88] waiting for apiserver healthz status ...
	I1009 20:20:05.957275  495888 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1009 20:20:05.960249  495888 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p embed-certs-565110 addons enable metrics-server
	
	I1009 20:20:05.963094  495888 out.go:179] * Enabled addons: storage-provisioner, default-storageclass, dashboard
	I1009 20:20:05.966069  495888 addons.go:514] duration metric: took 6.668415407s for enable addons: enabled=[storage-provisioner default-storageclass dashboard]
	I1009 20:20:05.968487  495888 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1009 20:20:05.969658  495888 api_server.go:141] control plane version: v1.34.1
	I1009 20:20:05.969700  495888 api_server.go:131] duration metric: took 12.429949ms to wait for apiserver health ...
	I1009 20:20:05.969710  495888 system_pods.go:43] waiting for kube-system pods to appear ...
	I1009 20:20:05.973374  495888 system_pods.go:59] 8 kube-system pods found
	I1009 20:20:05.973419  495888 system_pods.go:61] "coredns-66bc5c9577-zmqwp" [ff3de144-4c77-4486-be1e-ab88492e6a18] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1009 20:20:05.973429  495888 system_pods.go:61] "etcd-embed-certs-565110" [4ad4c426-96dc-4bd7-bf86-efc6658f3526] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1009 20:20:05.973434  495888 system_pods.go:61] "kindnet-mjfwz" [f079f818-4d35-4673-ab85-6b2fe322c9f9] Running
	I1009 20:20:05.973441  495888 system_pods.go:61] "kube-apiserver-embed-certs-565110" [5a497a15-f487-4c78-bf3e-a53c6d9f83db] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1009 20:20:05.973449  495888 system_pods.go:61] "kube-controller-manager-embed-certs-565110" [7460b871-81b4-49ff-bad1-b30126a8635c] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1009 20:20:05.973454  495888 system_pods.go:61] "kube-proxy-bhwvw" [f9d0b727-064f-4a1c-88e2-e238e5f43c4b] Running
	I1009 20:20:05.973470  495888 system_pods.go:61] "kube-scheduler-embed-certs-565110" [f706c945-9f4f-4f6d-83f8-c6cddb3ff41d] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1009 20:20:05.973474  495888 system_pods.go:61] "storage-provisioner" [9811b3ef-6b1c-42ea-a8c8-bdf0028bd024] Running
	I1009 20:20:05.973480  495888 system_pods.go:74] duration metric: took 3.763873ms to wait for pod list to return data ...
	I1009 20:20:05.973491  495888 default_sa.go:34] waiting for default service account to be created ...
	I1009 20:20:05.976144  495888 default_sa.go:45] found service account: "default"
	I1009 20:20:05.976166  495888 default_sa.go:55] duration metric: took 2.669804ms for default service account to be created ...
	I1009 20:20:05.976174  495888 system_pods.go:116] waiting for k8s-apps to be running ...
	I1009 20:20:05.980886  495888 system_pods.go:86] 8 kube-system pods found
	I1009 20:20:05.980930  495888 system_pods.go:89] "coredns-66bc5c9577-zmqwp" [ff3de144-4c77-4486-be1e-ab88492e6a18] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1009 20:20:05.980940  495888 system_pods.go:89] "etcd-embed-certs-565110" [4ad4c426-96dc-4bd7-bf86-efc6658f3526] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1009 20:20:05.980946  495888 system_pods.go:89] "kindnet-mjfwz" [f079f818-4d35-4673-ab85-6b2fe322c9f9] Running
	I1009 20:20:05.980955  495888 system_pods.go:89] "kube-apiserver-embed-certs-565110" [5a497a15-f487-4c78-bf3e-a53c6d9f83db] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1009 20:20:05.980963  495888 system_pods.go:89] "kube-controller-manager-embed-certs-565110" [7460b871-81b4-49ff-bad1-b30126a8635c] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1009 20:20:05.980968  495888 system_pods.go:89] "kube-proxy-bhwvw" [f9d0b727-064f-4a1c-88e2-e238e5f43c4b] Running
	I1009 20:20:05.980992  495888 system_pods.go:89] "kube-scheduler-embed-certs-565110" [f706c945-9f4f-4f6d-83f8-c6cddb3ff41d] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1009 20:20:05.981000  495888 system_pods.go:89] "storage-provisioner" [9811b3ef-6b1c-42ea-a8c8-bdf0028bd024] Running
	I1009 20:20:05.981007  495888 system_pods.go:126] duration metric: took 4.827699ms to wait for k8s-apps to be running ...
	I1009 20:20:05.981021  495888 system_svc.go:44] waiting for kubelet service to be running ....
	I1009 20:20:05.981085  495888 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1009 20:20:05.997748  495888 system_svc.go:56] duration metric: took 16.717209ms WaitForService to wait for kubelet
	I1009 20:20:05.997790  495888 kubeadm.go:586] duration metric: took 6.696906359s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1009 20:20:05.997808  495888 node_conditions.go:102] verifying NodePressure condition ...
	I1009 20:20:06.009632  495888 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1009 20:20:06.009684  495888 node_conditions.go:123] node cpu capacity is 2
	I1009 20:20:06.009700  495888 node_conditions.go:105] duration metric: took 11.886647ms to run NodePressure ...
	I1009 20:20:06.009715  495888 start.go:242] waiting for startup goroutines ...
	I1009 20:20:06.009723  495888 start.go:247] waiting for cluster config update ...
	I1009 20:20:06.009735  495888 start.go:256] writing updated cluster config ...
	I1009 20:20:06.010128  495888 ssh_runner.go:195] Run: rm -f paused
	I1009 20:20:06.015468  495888 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1009 20:20:06.020318  495888 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-zmqwp" in "kube-system" namespace to be "Ready" or be gone ...
	W1009 20:20:03.675557  492745 node_ready.go:57] node "default-k8s-diff-port-417984" has "Ready":"False" status (will retry)
	W1009 20:20:06.175568  492745 node_ready.go:57] node "default-k8s-diff-port-417984" has "Ready":"False" status (will retry)
	W1009 20:20:08.026794  495888 pod_ready.go:104] pod "coredns-66bc5c9577-zmqwp" is not "Ready", error: <nil>
	W1009 20:20:10.028306  495888 pod_ready.go:104] pod "coredns-66bc5c9577-zmqwp" is not "Ready", error: <nil>
	W1009 20:20:08.674193  492745 node_ready.go:57] node "default-k8s-diff-port-417984" has "Ready":"False" status (will retry)
	W1009 20:20:11.174725  492745 node_ready.go:57] node "default-k8s-diff-port-417984" has "Ready":"False" status (will retry)
	W1009 20:20:12.030211  495888 pod_ready.go:104] pod "coredns-66bc5c9577-zmqwp" is not "Ready", error: <nil>
	W1009 20:20:14.031972  495888 pod_ready.go:104] pod "coredns-66bc5c9577-zmqwp" is not "Ready", error: <nil>
	W1009 20:20:13.174928  492745 node_ready.go:57] node "default-k8s-diff-port-417984" has "Ready":"False" status (will retry)
	W1009 20:20:15.674695  492745 node_ready.go:57] node "default-k8s-diff-port-417984" has "Ready":"False" status (will retry)
	W1009 20:20:16.527708  495888 pod_ready.go:104] pod "coredns-66bc5c9577-zmqwp" is not "Ready", error: <nil>
	W1009 20:20:19.026340  495888 pod_ready.go:104] pod "coredns-66bc5c9577-zmqwp" is not "Ready", error: <nil>
	W1009 20:20:17.675546  492745 node_ready.go:57] node "default-k8s-diff-port-417984" has "Ready":"False" status (will retry)
	W1009 20:20:20.174799  492745 node_ready.go:57] node "default-k8s-diff-port-417984" has "Ready":"False" status (will retry)
	W1009 20:20:21.526662  495888 pod_ready.go:104] pod "coredns-66bc5c9577-zmqwp" is not "Ready", error: <nil>
	W1009 20:20:24.026394  495888 pod_ready.go:104] pod "coredns-66bc5c9577-zmqwp" is not "Ready", error: <nil>
	W1009 20:20:26.026883  495888 pod_ready.go:104] pod "coredns-66bc5c9577-zmqwp" is not "Ready", error: <nil>
	W1009 20:20:22.674016  492745 node_ready.go:57] node "default-k8s-diff-port-417984" has "Ready":"False" status (will retry)
	W1009 20:20:24.674813  492745 node_ready.go:57] node "default-k8s-diff-port-417984" has "Ready":"False" status (will retry)
	W1009 20:20:28.027437  495888 pod_ready.go:104] pod "coredns-66bc5c9577-zmqwp" is not "Ready", error: <nil>
	W1009 20:20:30.082635  495888 pod_ready.go:104] pod "coredns-66bc5c9577-zmqwp" is not "Ready", error: <nil>
	W1009 20:20:27.174625  492745 node_ready.go:57] node "default-k8s-diff-port-417984" has "Ready":"False" status (will retry)
	W1009 20:20:29.675082  492745 node_ready.go:57] node "default-k8s-diff-port-417984" has "Ready":"False" status (will retry)
	I1009 20:20:32.174806  492745 node_ready.go:49] node "default-k8s-diff-port-417984" is "Ready"
	I1009 20:20:32.174843  492745 node_ready.go:38] duration metric: took 41.503651759s for node "default-k8s-diff-port-417984" to be "Ready" ...
	I1009 20:20:32.174857  492745 api_server.go:52] waiting for apiserver process to appear ...
	I1009 20:20:32.174913  492745 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:20:32.190088  492745 api_server.go:72] duration metric: took 42.367482347s to wait for apiserver process to appear ...
	I1009 20:20:32.190112  492745 api_server.go:88] waiting for apiserver healthz status ...
	I1009 20:20:32.190133  492745 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8444/healthz ...
	I1009 20:20:32.198669  492745 api_server.go:279] https://192.168.85.2:8444/healthz returned 200:
	ok
	I1009 20:20:32.199868  492745 api_server.go:141] control plane version: v1.34.1
	I1009 20:20:32.199893  492745 api_server.go:131] duration metric: took 9.773485ms to wait for apiserver health ...
	I1009 20:20:32.199901  492745 system_pods.go:43] waiting for kube-system pods to appear ...
	I1009 20:20:32.205780  492745 system_pods.go:59] 8 kube-system pods found
	I1009 20:20:32.205895  492745 system_pods.go:61] "coredns-66bc5c9577-4c2vb" [1372d4eb-13df-43ba-add1-18330c9c110d] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1009 20:20:32.205919  492745 system_pods.go:61] "etcd-default-k8s-diff-port-417984" [2f46d319-463a-4bf1-b9f0-33d017fe17c5] Running
	I1009 20:20:32.205965  492745 system_pods.go:61] "kindnet-s57gp" [c69cde96-0e11-4f41-a715-961981d36066] Running
	I1009 20:20:32.205987  492745 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-417984" [fff706cc-3c18-400c-9fb7-10cec1723bc7] Running
	I1009 20:20:32.206011  492745 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-417984" [b8ecf531-e830-4a99-abcc-1fc8175c1598] Running
	I1009 20:20:32.206050  492745 system_pods.go:61] "kube-proxy-jnlzf" [c888f2c2-aaea-43d1-b81a-fe2762b4f733] Running
	I1009 20:20:32.206087  492745 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-417984" [27737a80-8846-4a8f-b4c6-2845ddca3cca] Running
	I1009 20:20:32.206111  492745 system_pods.go:61] "storage-provisioner" [35085697-b4c2-4265-a1eb-2ced25791f19] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1009 20:20:32.206136  492745 system_pods.go:74] duration metric: took 6.227019ms to wait for pod list to return data ...
	I1009 20:20:32.206167  492745 default_sa.go:34] waiting for default service account to be created ...
	I1009 20:20:32.209460  492745 default_sa.go:45] found service account: "default"
	I1009 20:20:32.209485  492745 default_sa.go:55] duration metric: took 3.292588ms for default service account to be created ...
	I1009 20:20:32.209494  492745 system_pods.go:116] waiting for k8s-apps to be running ...
	I1009 20:20:32.213461  492745 system_pods.go:86] 8 kube-system pods found
	I1009 20:20:32.213493  492745 system_pods.go:89] "coredns-66bc5c9577-4c2vb" [1372d4eb-13df-43ba-add1-18330c9c110d] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1009 20:20:32.213501  492745 system_pods.go:89] "etcd-default-k8s-diff-port-417984" [2f46d319-463a-4bf1-b9f0-33d017fe17c5] Running
	I1009 20:20:32.213507  492745 system_pods.go:89] "kindnet-s57gp" [c69cde96-0e11-4f41-a715-961981d36066] Running
	I1009 20:20:32.213512  492745 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-417984" [fff706cc-3c18-400c-9fb7-10cec1723bc7] Running
	I1009 20:20:32.213516  492745 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-417984" [b8ecf531-e830-4a99-abcc-1fc8175c1598] Running
	I1009 20:20:32.213521  492745 system_pods.go:89] "kube-proxy-jnlzf" [c888f2c2-aaea-43d1-b81a-fe2762b4f733] Running
	I1009 20:20:32.213525  492745 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-417984" [27737a80-8846-4a8f-b4c6-2845ddca3cca] Running
	I1009 20:20:32.213530  492745 system_pods.go:89] "storage-provisioner" [35085697-b4c2-4265-a1eb-2ced25791f19] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1009 20:20:32.213551  492745 retry.go:31] will retry after 208.993801ms: missing components: kube-dns
	I1009 20:20:32.426651  492745 system_pods.go:86] 8 kube-system pods found
	I1009 20:20:32.426688  492745 system_pods.go:89] "coredns-66bc5c9577-4c2vb" [1372d4eb-13df-43ba-add1-18330c9c110d] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1009 20:20:32.426697  492745 system_pods.go:89] "etcd-default-k8s-diff-port-417984" [2f46d319-463a-4bf1-b9f0-33d017fe17c5] Running
	I1009 20:20:32.426706  492745 system_pods.go:89] "kindnet-s57gp" [c69cde96-0e11-4f41-a715-961981d36066] Running
	I1009 20:20:32.426710  492745 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-417984" [fff706cc-3c18-400c-9fb7-10cec1723bc7] Running
	I1009 20:20:32.426715  492745 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-417984" [b8ecf531-e830-4a99-abcc-1fc8175c1598] Running
	I1009 20:20:32.426720  492745 system_pods.go:89] "kube-proxy-jnlzf" [c888f2c2-aaea-43d1-b81a-fe2762b4f733] Running
	I1009 20:20:32.426724  492745 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-417984" [27737a80-8846-4a8f-b4c6-2845ddca3cca] Running
	I1009 20:20:32.426729  492745 system_pods.go:89] "storage-provisioner" [35085697-b4c2-4265-a1eb-2ced25791f19] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1009 20:20:32.426744  492745 retry.go:31] will retry after 247.744501ms: missing components: kube-dns
	I1009 20:20:32.678852  492745 system_pods.go:86] 8 kube-system pods found
	I1009 20:20:32.678888  492745 system_pods.go:89] "coredns-66bc5c9577-4c2vb" [1372d4eb-13df-43ba-add1-18330c9c110d] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1009 20:20:32.678896  492745 system_pods.go:89] "etcd-default-k8s-diff-port-417984" [2f46d319-463a-4bf1-b9f0-33d017fe17c5] Running
	I1009 20:20:32.678902  492745 system_pods.go:89] "kindnet-s57gp" [c69cde96-0e11-4f41-a715-961981d36066] Running
	I1009 20:20:32.678906  492745 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-417984" [fff706cc-3c18-400c-9fb7-10cec1723bc7] Running
	I1009 20:20:32.678910  492745 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-417984" [b8ecf531-e830-4a99-abcc-1fc8175c1598] Running
	I1009 20:20:32.678914  492745 system_pods.go:89] "kube-proxy-jnlzf" [c888f2c2-aaea-43d1-b81a-fe2762b4f733] Running
	I1009 20:20:32.678918  492745 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-417984" [27737a80-8846-4a8f-b4c6-2845ddca3cca] Running
	I1009 20:20:32.678928  492745 system_pods.go:89] "storage-provisioner" [35085697-b4c2-4265-a1eb-2ced25791f19] Running
	I1009 20:20:32.678941  492745 system_pods.go:126] duration metric: took 469.440984ms to wait for k8s-apps to be running ...
	I1009 20:20:32.678952  492745 system_svc.go:44] waiting for kubelet service to be running ....
	I1009 20:20:32.679021  492745 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1009 20:20:32.695608  492745 system_svc.go:56] duration metric: took 16.635802ms WaitForService to wait for kubelet
	I1009 20:20:32.695641  492745 kubeadm.go:586] duration metric: took 42.873046641s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1009 20:20:32.695745  492745 node_conditions.go:102] verifying NodePressure condition ...
	I1009 20:20:32.699281  492745 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1009 20:20:32.699327  492745 node_conditions.go:123] node cpu capacity is 2
	I1009 20:20:32.699341  492745 node_conditions.go:105] duration metric: took 3.588625ms to run NodePressure ...
	I1009 20:20:32.699353  492745 start.go:242] waiting for startup goroutines ...
	I1009 20:20:32.699362  492745 start.go:247] waiting for cluster config update ...
	I1009 20:20:32.699378  492745 start.go:256] writing updated cluster config ...
	I1009 20:20:32.699753  492745 ssh_runner.go:195] Run: rm -f paused
	I1009 20:20:32.704321  492745 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1009 20:20:32.708386  492745 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-4c2vb" in "kube-system" namespace to be "Ready" or be gone ...
	I1009 20:20:33.714486  492745 pod_ready.go:94] pod "coredns-66bc5c9577-4c2vb" is "Ready"
	I1009 20:20:33.714516  492745 pod_ready.go:86] duration metric: took 1.006106251s for pod "coredns-66bc5c9577-4c2vb" in "kube-system" namespace to be "Ready" or be gone ...
	I1009 20:20:33.717510  492745 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-417984" in "kube-system" namespace to be "Ready" or be gone ...
	I1009 20:20:33.724030  492745 pod_ready.go:94] pod "etcd-default-k8s-diff-port-417984" is "Ready"
	I1009 20:20:33.724067  492745 pod_ready.go:86] duration metric: took 6.523752ms for pod "etcd-default-k8s-diff-port-417984" in "kube-system" namespace to be "Ready" or be gone ...
	I1009 20:20:33.727654  492745 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-417984" in "kube-system" namespace to be "Ready" or be gone ...
	I1009 20:20:33.732867  492745 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-417984" is "Ready"
	I1009 20:20:33.732899  492745 pod_ready.go:86] duration metric: took 5.219538ms for pod "kube-apiserver-default-k8s-diff-port-417984" in "kube-system" namespace to be "Ready" or be gone ...
	I1009 20:20:33.735435  492745 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-417984" in "kube-system" namespace to be "Ready" or be gone ...
	I1009 20:20:33.914645  492745 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-417984" is "Ready"
	I1009 20:20:33.914725  492745 pod_ready.go:86] duration metric: took 179.260924ms for pod "kube-controller-manager-default-k8s-diff-port-417984" in "kube-system" namespace to be "Ready" or be gone ...
	I1009 20:20:34.112850  492745 pod_ready.go:83] waiting for pod "kube-proxy-jnlzf" in "kube-system" namespace to be "Ready" or be gone ...
	I1009 20:20:34.512679  492745 pod_ready.go:94] pod "kube-proxy-jnlzf" is "Ready"
	I1009 20:20:34.512722  492745 pod_ready.go:86] duration metric: took 399.843804ms for pod "kube-proxy-jnlzf" in "kube-system" namespace to be "Ready" or be gone ...
	I1009 20:20:34.713169  492745 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-417984" in "kube-system" namespace to be "Ready" or be gone ...
	I1009 20:20:35.113508  492745 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-417984" is "Ready"
	I1009 20:20:35.113547  492745 pod_ready.go:86] duration metric: took 400.349632ms for pod "kube-scheduler-default-k8s-diff-port-417984" in "kube-system" namespace to be "Ready" or be gone ...
	I1009 20:20:35.113560  492745 pod_ready.go:40] duration metric: took 2.409163956s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1009 20:20:35.180770  492745 start.go:628] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1009 20:20:35.185314  492745 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-417984" cluster and "default" namespace by default
	W1009 20:20:32.526518  495888 pod_ready.go:104] pod "coredns-66bc5c9577-zmqwp" is not "Ready", error: <nil>
	W1009 20:20:35.026457  495888 pod_ready.go:104] pod "coredns-66bc5c9577-zmqwp" is not "Ready", error: <nil>
	W1009 20:20:37.028006  495888 pod_ready.go:104] pod "coredns-66bc5c9577-zmqwp" is not "Ready", error: <nil>
	I1009 20:20:39.026861  495888 pod_ready.go:94] pod "coredns-66bc5c9577-zmqwp" is "Ready"
	I1009 20:20:39.026887  495888 pod_ready.go:86] duration metric: took 33.006531676s for pod "coredns-66bc5c9577-zmqwp" in "kube-system" namespace to be "Ready" or be gone ...
	I1009 20:20:39.029834  495888 pod_ready.go:83] waiting for pod "etcd-embed-certs-565110" in "kube-system" namespace to be "Ready" or be gone ...
	I1009 20:20:39.039610  495888 pod_ready.go:94] pod "etcd-embed-certs-565110" is "Ready"
	I1009 20:20:39.039636  495888 pod_ready.go:86] duration metric: took 9.73968ms for pod "etcd-embed-certs-565110" in "kube-system" namespace to be "Ready" or be gone ...
	I1009 20:20:39.042389  495888 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-565110" in "kube-system" namespace to be "Ready" or be gone ...
	I1009 20:20:39.047521  495888 pod_ready.go:94] pod "kube-apiserver-embed-certs-565110" is "Ready"
	I1009 20:20:39.047551  495888 pod_ready.go:86] duration metric: took 5.132432ms for pod "kube-apiserver-embed-certs-565110" in "kube-system" namespace to be "Ready" or be gone ...
	I1009 20:20:39.050305  495888 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-565110" in "kube-system" namespace to be "Ready" or be gone ...
	I1009 20:20:39.224004  495888 pod_ready.go:94] pod "kube-controller-manager-embed-certs-565110" is "Ready"
	I1009 20:20:39.224037  495888 pod_ready.go:86] duration metric: took 173.70233ms for pod "kube-controller-manager-embed-certs-565110" in "kube-system" namespace to be "Ready" or be gone ...
	I1009 20:20:39.424038  495888 pod_ready.go:83] waiting for pod "kube-proxy-bhwvw" in "kube-system" namespace to be "Ready" or be gone ...
	I1009 20:20:39.823381  495888 pod_ready.go:94] pod "kube-proxy-bhwvw" is "Ready"
	I1009 20:20:39.823451  495888 pod_ready.go:86] duration metric: took 399.38654ms for pod "kube-proxy-bhwvw" in "kube-system" namespace to be "Ready" or be gone ...
	I1009 20:20:40.043782  495888 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-565110" in "kube-system" namespace to be "Ready" or be gone ...
	I1009 20:20:40.424476  495888 pod_ready.go:94] pod "kube-scheduler-embed-certs-565110" is "Ready"
	I1009 20:20:40.424500  495888 pod_ready.go:86] duration metric: took 380.690278ms for pod "kube-scheduler-embed-certs-565110" in "kube-system" namespace to be "Ready" or be gone ...
	I1009 20:20:40.424512  495888 pod_ready.go:40] duration metric: took 34.409013666s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1009 20:20:40.482252  495888 start.go:628] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1009 20:20:40.485424  495888 out.go:179] * Done! kubectl is now configured to use "embed-certs-565110" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Oct 09 20:20:32 default-k8s-diff-port-417984 crio[840]: time="2025-10-09T20:20:32.488283123Z" level=info msg="Created container 214aceb40a85931f9890398213c4270205bfffd2313663452cbe8e168ea0dbcb: kube-system/coredns-66bc5c9577-4c2vb/coredns" id=f5112be5-6eb4-4a49-ba8e-37aa14918a54 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 20:20:32 default-k8s-diff-port-417984 crio[840]: time="2025-10-09T20:20:32.490746048Z" level=info msg="Starting container: 214aceb40a85931f9890398213c4270205bfffd2313663452cbe8e168ea0dbcb" id=a2d2fb34-1d1d-451a-8c0e-e61480d4e4b5 name=/runtime.v1.RuntimeService/StartContainer
	Oct 09 20:20:32 default-k8s-diff-port-417984 crio[840]: time="2025-10-09T20:20:32.495598632Z" level=info msg="Started container" PID=1776 containerID=214aceb40a85931f9890398213c4270205bfffd2313663452cbe8e168ea0dbcb description=kube-system/coredns-66bc5c9577-4c2vb/coredns id=a2d2fb34-1d1d-451a-8c0e-e61480d4e4b5 name=/runtime.v1.RuntimeService/StartContainer sandboxID=3dc3aba69fa1c0e96532238835cf5c9472460d80fc0c670811ba897d5ae2dbc7
	Oct 09 20:20:35 default-k8s-diff-port-417984 crio[840]: time="2025-10-09T20:20:35.732728578Z" level=info msg="Running pod sandbox: default/busybox/POD" id=a8cd283d-9179-40ca-bce9-fea2d2796051 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 09 20:20:35 default-k8s-diff-port-417984 crio[840]: time="2025-10-09T20:20:35.732808817Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 09 20:20:35 default-k8s-diff-port-417984 crio[840]: time="2025-10-09T20:20:35.738636594Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:378fec646c3729e02be3cced85279aa4bab1dc432d2363497857f0f3a95330ec UID:dab2e635-8f2d-4a44-9384-70a522687435 NetNS:/var/run/netns/1b8b8a96-a837-4251-a801-550ba428b8e0 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x4000524850}] Aliases:map[]}"
	Oct 09 20:20:35 default-k8s-diff-port-417984 crio[840]: time="2025-10-09T20:20:35.738686859Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Oct 09 20:20:35 default-k8s-diff-port-417984 crio[840]: time="2025-10-09T20:20:35.751659016Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:378fec646c3729e02be3cced85279aa4bab1dc432d2363497857f0f3a95330ec UID:dab2e635-8f2d-4a44-9384-70a522687435 NetNS:/var/run/netns/1b8b8a96-a837-4251-a801-550ba428b8e0 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x4000524850}] Aliases:map[]}"
	Oct 09 20:20:35 default-k8s-diff-port-417984 crio[840]: time="2025-10-09T20:20:35.751806825Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Oct 09 20:20:35 default-k8s-diff-port-417984 crio[840]: time="2025-10-09T20:20:35.755424292Z" level=info msg="Ran pod sandbox 378fec646c3729e02be3cced85279aa4bab1dc432d2363497857f0f3a95330ec with infra container: default/busybox/POD" id=a8cd283d-9179-40ca-bce9-fea2d2796051 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 09 20:20:35 default-k8s-diff-port-417984 crio[840]: time="2025-10-09T20:20:35.75664807Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=f2ab5b9e-3d52-4d05-a8d1-169b51940103 name=/runtime.v1.ImageService/ImageStatus
	Oct 09 20:20:35 default-k8s-diff-port-417984 crio[840]: time="2025-10-09T20:20:35.756913985Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=f2ab5b9e-3d52-4d05-a8d1-169b51940103 name=/runtime.v1.ImageService/ImageStatus
	Oct 09 20:20:35 default-k8s-diff-port-417984 crio[840]: time="2025-10-09T20:20:35.757072206Z" level=info msg="Neither image nor artifact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=f2ab5b9e-3d52-4d05-a8d1-169b51940103 name=/runtime.v1.ImageService/ImageStatus
	Oct 09 20:20:35 default-k8s-diff-port-417984 crio[840]: time="2025-10-09T20:20:35.765277483Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=5838a6ce-3e7e-4116-9f7c-a61a7ee4f6ca name=/runtime.v1.ImageService/PullImage
	Oct 09 20:20:35 default-k8s-diff-port-417984 crio[840]: time="2025-10-09T20:20:35.769650586Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Oct 09 20:20:37 default-k8s-diff-port-417984 crio[840]: time="2025-10-09T20:20:37.736797163Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e" id=5838a6ce-3e7e-4116-9f7c-a61a7ee4f6ca name=/runtime.v1.ImageService/PullImage
	Oct 09 20:20:37 default-k8s-diff-port-417984 crio[840]: time="2025-10-09T20:20:37.737728966Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=894f0f86-3312-4003-804f-e5375427ee81 name=/runtime.v1.ImageService/ImageStatus
	Oct 09 20:20:37 default-k8s-diff-port-417984 crio[840]: time="2025-10-09T20:20:37.739491492Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=b13df0bd-ea8e-404d-893c-e4c5cb9e5b82 name=/runtime.v1.ImageService/ImageStatus
	Oct 09 20:20:37 default-k8s-diff-port-417984 crio[840]: time="2025-10-09T20:20:37.746065502Z" level=info msg="Creating container: default/busybox/busybox" id=7f246ba7-f57e-4840-b93b-714e7b7f1e49 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 20:20:37 default-k8s-diff-port-417984 crio[840]: time="2025-10-09T20:20:37.746885402Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 09 20:20:37 default-k8s-diff-port-417984 crio[840]: time="2025-10-09T20:20:37.751724587Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 09 20:20:37 default-k8s-diff-port-417984 crio[840]: time="2025-10-09T20:20:37.752477696Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 09 20:20:37 default-k8s-diff-port-417984 crio[840]: time="2025-10-09T20:20:37.772503368Z" level=info msg="Created container 5c9929d7a1715d7c821b675dc7637882fc71b3b93c3fade5b30bae941d8919d9: default/busybox/busybox" id=7f246ba7-f57e-4840-b93b-714e7b7f1e49 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 20:20:37 default-k8s-diff-port-417984 crio[840]: time="2025-10-09T20:20:37.773563361Z" level=info msg="Starting container: 5c9929d7a1715d7c821b675dc7637882fc71b3b93c3fade5b30bae941d8919d9" id=e701ef72-7c12-41b4-ba83-7e6574b4fb00 name=/runtime.v1.RuntimeService/StartContainer
	Oct 09 20:20:37 default-k8s-diff-port-417984 crio[840]: time="2025-10-09T20:20:37.775675266Z" level=info msg="Started container" PID=1833 containerID=5c9929d7a1715d7c821b675dc7637882fc71b3b93c3fade5b30bae941d8919d9 description=default/busybox/busybox id=e701ef72-7c12-41b4-ba83-7e6574b4fb00 name=/runtime.v1.RuntimeService/StartContainer sandboxID=378fec646c3729e02be3cced85279aa4bab1dc432d2363497857f0f3a95330ec
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD                                                    NAMESPACE
	5c9929d7a1715       gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e   8 seconds ago        Running             busybox                   0                   378fec646c372       busybox                                                default
	214aceb40a859       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                      13 seconds ago       Running             coredns                   0                   3dc3aba69fa1c       coredns-66bc5c9577-4c2vb                               kube-system
	4c2105d2f9727       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                      14 seconds ago       Running             storage-provisioner       0                   2871ec01c7dd5       storage-provisioner                                    kube-system
	a12fc6671250f       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                      54 seconds ago       Running             kube-proxy                0                   3e67a673fe93f       kube-proxy-jnlzf                                       kube-system
	4f086a1212a76       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                      54 seconds ago       Running             kindnet-cni               0                   98635902353e5       kindnet-s57gp                                          kube-system
	0a39eb9755560       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                      About a minute ago   Running             kube-apiserver            0                   777e4ebc7803c       kube-apiserver-default-k8s-diff-port-417984            kube-system
	994e3b8ee6088       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                      About a minute ago   Running             kube-scheduler            0                   b6619c8ae29e9       kube-scheduler-default-k8s-diff-port-417984            kube-system
	db7beef62609c       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                      About a minute ago   Running             kube-controller-manager   0                   852a5595e7017       kube-controller-manager-default-k8s-diff-port-417984   kube-system
	bcf07de9ec920       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                      About a minute ago   Running             etcd                      0                   e32db64db7224       etcd-default-k8s-diff-port-417984                      kube-system
	
	
	==> coredns [214aceb40a85931f9890398213c4270205bfffd2313663452cbe8e168ea0dbcb] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:34813 - 23756 "HINFO IN 2901852512699498660.2839228404087036968. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.011851503s
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-417984
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=default-k8s-diff-port-417984
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=67931d2cc0a2e29153be17ee4f2d502b8a45c9cb
	                    minikube.k8s.io/name=default-k8s-diff-port-417984
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_09T20_19_45_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 09 Oct 2025 20:19:41 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-417984
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 09 Oct 2025 20:20:45 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 09 Oct 2025 20:20:46 +0000   Thu, 09 Oct 2025 20:19:38 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 09 Oct 2025 20:20:46 +0000   Thu, 09 Oct 2025 20:19:38 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 09 Oct 2025 20:20:46 +0000   Thu, 09 Oct 2025 20:19:38 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 09 Oct 2025 20:20:46 +0000   Thu, 09 Oct 2025 20:20:31 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    default-k8s-diff-port-417984
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022292Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022292Ki
	  pods:               110
	System Info:
	  Machine ID:                 941117e976c74c738a538c24e37163de
	  System UUID:                47844709-b89d-494e-8261-a7f5aabcecf0
	  Boot ID:                    7eb9c7d9-8be8-415a-b196-052b632584a4
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         11s
	  kube-system                 coredns-66bc5c9577-4c2vb                                100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     57s
	  kube-system                 etcd-default-k8s-diff-port-417984                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         62s
	  kube-system                 kindnet-s57gp                                           100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      57s
	  kube-system                 kube-apiserver-default-k8s-diff-port-417984             250m (12%)    0 (0%)      0 (0%)           0 (0%)         63s
	  kube-system                 kube-controller-manager-default-k8s-diff-port-417984    200m (10%)    0 (0%)      0 (0%)           0 (0%)         62s
	  kube-system                 kube-proxy-jnlzf                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         57s
	  kube-system                 kube-scheduler-default-k8s-diff-port-417984             100m (5%)     0 (0%)      0 (0%)           0 (0%)         62s
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         55s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 54s                kube-proxy       
	  Normal   NodeHasSufficientMemory  70s (x8 over 70s)  kubelet          Node default-k8s-diff-port-417984 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    70s (x8 over 70s)  kubelet          Node default-k8s-diff-port-417984 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     70s (x8 over 70s)  kubelet          Node default-k8s-diff-port-417984 status is now: NodeHasSufficientPID
	  Normal   Starting                 62s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 62s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  62s                kubelet          Node default-k8s-diff-port-417984 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    62s                kubelet          Node default-k8s-diff-port-417984 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     62s                kubelet          Node default-k8s-diff-port-417984 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           58s                node-controller  Node default-k8s-diff-port-417984 event: Registered Node default-k8s-diff-port-417984 in Controller
	  Normal   NodeReady                15s                kubelet          Node default-k8s-diff-port-417984 status is now: NodeReady
	
	
	==> dmesg <==
	[Oct 9 19:49] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:50] overlayfs: idmapped layers are currently not supported
	[ +27.967875] overlayfs: idmapped layers are currently not supported
	[  +2.167003] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:51] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:52] overlayfs: idmapped layers are currently not supported
	[ +41.056229] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:53] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:54] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:55] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:57] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:59] overlayfs: idmapped layers are currently not supported
	[ +30.257956] overlayfs: idmapped layers are currently not supported
	[Oct 9 20:02] overlayfs: idmapped layers are currently not supported
	[Oct 9 20:04] overlayfs: idmapped layers are currently not supported
	[Oct 9 20:06] overlayfs: idmapped layers are currently not supported
	[Oct 9 20:12] overlayfs: idmapped layers are currently not supported
	[Oct 9 20:14] overlayfs: idmapped layers are currently not supported
	[Oct 9 20:15] overlayfs: idmapped layers are currently not supported
	[Oct 9 20:16] overlayfs: idmapped layers are currently not supported
	[ +23.810739] overlayfs: idmapped layers are currently not supported
	[Oct 9 20:18] overlayfs: idmapped layers are currently not supported
	[ +26.082927] overlayfs: idmapped layers are currently not supported
	[Oct 9 20:19] overlayfs: idmapped layers are currently not supported
	[ +21.956614] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [bcf07de9ec920298ffcb59bde3c22ad7aa9df7d84cb39bee3525d982d92371da] <==
	{"level":"warn","ts":"2025-10-09T20:19:39.748090Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45898","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T20:19:39.805366Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45908","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T20:19:39.822369Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45924","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T20:19:39.867502Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45938","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T20:19:39.891089Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45960","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T20:19:39.913923Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45980","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T20:19:39.945668Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45994","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T20:19:40.022671Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46010","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T20:19:40.032966Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46030","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T20:19:40.084271Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46042","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T20:19:40.151502Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46046","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T20:19:40.163728Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46064","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T20:19:40.227240Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46082","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T20:19:40.252404Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46100","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T20:19:40.277688Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46112","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T20:19:40.320859Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46136","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T20:19:40.346023Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46150","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T20:19:40.377369Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46178","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T20:19:40.413891Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46204","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T20:19:40.428311Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46230","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T20:19:40.470299Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46252","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T20:19:40.511646Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46266","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T20:19:40.531611Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46294","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T20:19:40.578915Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46302","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T20:19:40.724678Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46322","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 20:20:46 up  3:03,  0 user,  load average: 3.73, 3.01, 2.16
	Linux default-k8s-diff-port-417984 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [4f086a1212a7659a684f36e58fdb918883649283656b9ee483e022f3dbaa9007] <==
	I1009 20:19:51.512818       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1009 20:19:51.513253       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1009 20:19:51.513926       1 main.go:148] setting mtu 1500 for CNI 
	I1009 20:19:51.513991       1 main.go:178] kindnetd IP family: "ipv4"
	I1009 20:19:51.514026       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-09T20:19:51Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1009 20:19:51.705637       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1009 20:19:51.705758       1 controller.go:381] "Waiting for informer caches to sync"
	I1009 20:19:51.705805       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1009 20:19:51.706641       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1009 20:20:21.706358       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1009 20:20:21.706454       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1009 20:20:21.706606       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1009 20:20:21.708043       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	I1009 20:20:23.206943       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1009 20:20:23.207040       1 metrics.go:72] Registering metrics
	I1009 20:20:23.207132       1 controller.go:711] "Syncing nftables rules"
	I1009 20:20:31.712905       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1009 20:20:31.712961       1 main.go:301] handling current node
	I1009 20:20:41.706908       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1009 20:20:41.706955       1 main.go:301] handling current node
	
	
	==> kube-apiserver [0a39eb97555602b4b094001e4e3a1900b0c7bb490791a84f8d8246b3ffb8e431] <==
	I1009 20:19:41.723125       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1009 20:19:41.719094       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1009 20:19:41.726109       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1009 20:19:41.793420       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1009 20:19:41.800772       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1009 20:19:41.821719       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1009 20:19:41.823883       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1009 20:19:41.828414       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1009 20:19:42.398603       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1009 20:19:42.407512       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1009 20:19:42.408333       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1009 20:19:43.187708       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1009 20:19:43.262671       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1009 20:19:43.410529       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1009 20:19:43.418841       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.85.2]
	I1009 20:19:43.420218       1 controller.go:667] quota admission added evaluator for: endpoints
	I1009 20:19:43.427201       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1009 20:19:43.558535       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1009 20:19:44.310695       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1009 20:19:44.328344       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1009 20:19:44.340063       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1009 20:19:48.657193       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1009 20:19:48.663639       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1009 20:19:49.316183       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	I1009 20:19:49.378683       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [db7beef62609cfad1d2625c624ba42e07b3fa9c852edd116c3529e09ee23ec60] <==
	I1009 20:19:48.599228       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1009 20:19:48.599320       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="default-k8s-diff-port-417984"
	I1009 20:19:48.599368       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1009 20:19:48.599794       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1009 20:19:48.600479       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1009 20:19:48.600582       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1009 20:19:48.604376       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1009 20:19:48.604457       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1009 20:19:48.605794       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1009 20:19:48.606046       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1009 20:19:48.606112       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1009 20:19:48.607129       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1009 20:19:48.609259       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1009 20:19:48.610708       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1009 20:19:48.611162       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1009 20:19:48.611235       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1009 20:19:48.621141       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1009 20:19:48.621271       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1009 20:19:48.621342       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1009 20:19:48.621400       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1009 20:19:48.621436       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1009 20:19:48.621465       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1009 20:19:48.622036       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1009 20:19:48.638982       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="default-k8s-diff-port-417984" podCIDRs=["10.244.0.0/24"]
	I1009 20:20:33.605091       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [a12fc6671250f739436ad07272cfd28149ec3df5b61bcedb4ce7955f0e155dcf] <==
	I1009 20:19:51.668415       1 server_linux.go:53] "Using iptables proxy"
	I1009 20:19:51.755015       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1009 20:19:51.858595       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1009 20:19:51.858637       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1009 20:19:51.858721       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1009 20:19:51.900064       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1009 20:19:51.900384       1 server_linux.go:132] "Using iptables Proxier"
	I1009 20:19:51.912788       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1009 20:19:51.913556       1 server.go:527] "Version info" version="v1.34.1"
	I1009 20:19:51.913627       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1009 20:19:51.919578       1 config.go:200] "Starting service config controller"
	I1009 20:19:51.919666       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1009 20:19:51.919713       1 config.go:106] "Starting endpoint slice config controller"
	I1009 20:19:51.919741       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1009 20:19:51.919777       1 config.go:403] "Starting serviceCIDR config controller"
	I1009 20:19:51.919803       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1009 20:19:51.920718       1 config.go:309] "Starting node config controller"
	I1009 20:19:51.924011       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1009 20:19:51.924094       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1009 20:19:52.020431       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1009 20:19:52.020471       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1009 20:19:52.020495       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [994e3b8ee6088e28d6cdda6b6c512248e96398b8443f5c9542c9db73b58e4d2f] <==
	I1009 20:19:40.554928       1 serving.go:386] Generated self-signed cert in-memory
	I1009 20:19:43.347879       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1009 20:19:43.347994       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1009 20:19:43.356611       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1009 20:19:43.356754       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1009 20:19:43.356859       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1009 20:19:43.356729       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1009 20:19:43.356917       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1009 20:19:43.356766       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1009 20:19:43.356935       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1009 20:19:43.356778       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1009 20:19:43.457843       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1009 20:19:43.457948       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1009 20:19:43.457845       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 09 20:19:49 default-k8s-diff-port-417984 kubelet[1349]: I1009 20:19:49.411027    1349 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c888f2c2-aaea-43d1-b81a-fe2762b4f733-xtables-lock\") pod \"kube-proxy-jnlzf\" (UID: \"c888f2c2-aaea-43d1-b81a-fe2762b4f733\") " pod="kube-system/kube-proxy-jnlzf"
	Oct 09 20:19:49 default-k8s-diff-port-417984 kubelet[1349]: I1009 20:19:49.411071    1349 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gtbzf\" (UniqueName: \"kubernetes.io/projected/c888f2c2-aaea-43d1-b81a-fe2762b4f733-kube-api-access-gtbzf\") pod \"kube-proxy-jnlzf\" (UID: \"c888f2c2-aaea-43d1-b81a-fe2762b4f733\") " pod="kube-system/kube-proxy-jnlzf"
	Oct 09 20:19:49 default-k8s-diff-port-417984 kubelet[1349]: I1009 20:19:49.411150    1349 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/c69cde96-0e11-4f41-a715-961981d36066-cni-cfg\") pod \"kindnet-s57gp\" (UID: \"c69cde96-0e11-4f41-a715-961981d36066\") " pod="kube-system/kindnet-s57gp"
	Oct 09 20:19:49 default-k8s-diff-port-417984 kubelet[1349]: I1009 20:19:49.411203    1349 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9mlgb\" (UniqueName: \"kubernetes.io/projected/c69cde96-0e11-4f41-a715-961981d36066-kube-api-access-9mlgb\") pod \"kindnet-s57gp\" (UID: \"c69cde96-0e11-4f41-a715-961981d36066\") " pod="kube-system/kindnet-s57gp"
	Oct 09 20:19:49 default-k8s-diff-port-417984 kubelet[1349]: I1009 20:19:49.411247    1349 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c888f2c2-aaea-43d1-b81a-fe2762b4f733-lib-modules\") pod \"kube-proxy-jnlzf\" (UID: \"c888f2c2-aaea-43d1-b81a-fe2762b4f733\") " pod="kube-system/kube-proxy-jnlzf"
	Oct 09 20:19:50 default-k8s-diff-port-417984 kubelet[1349]: E1009 20:19:50.526378    1349 projected.go:291] Couldn't get configMap kube-system/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition
	Oct 09 20:19:50 default-k8s-diff-port-417984 kubelet[1349]: E1009 20:19:50.526420    1349 projected.go:196] Error preparing data for projected volume kube-api-access-9mlgb for pod kube-system/kindnet-s57gp: failed to sync configmap cache: timed out waiting for the condition
	Oct 09 20:19:50 default-k8s-diff-port-417984 kubelet[1349]: E1009 20:19:50.526514    1349 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c69cde96-0e11-4f41-a715-961981d36066-kube-api-access-9mlgb podName:c69cde96-0e11-4f41-a715-961981d36066 nodeName:}" failed. No retries permitted until 2025-10-09 20:19:51.026485375 +0000 UTC m=+6.893852106 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-9mlgb" (UniqueName: "kubernetes.io/projected/c69cde96-0e11-4f41-a715-961981d36066-kube-api-access-9mlgb") pod "kindnet-s57gp" (UID: "c69cde96-0e11-4f41-a715-961981d36066") : failed to sync configmap cache: timed out waiting for the condition
	Oct 09 20:19:50 default-k8s-diff-port-417984 kubelet[1349]: E1009 20:19:50.532587    1349 projected.go:291] Couldn't get configMap kube-system/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition
	Oct 09 20:19:50 default-k8s-diff-port-417984 kubelet[1349]: E1009 20:19:50.532623    1349 projected.go:196] Error preparing data for projected volume kube-api-access-gtbzf for pod kube-system/kube-proxy-jnlzf: failed to sync configmap cache: timed out waiting for the condition
	Oct 09 20:19:50 default-k8s-diff-port-417984 kubelet[1349]: E1009 20:19:50.532690    1349 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c888f2c2-aaea-43d1-b81a-fe2762b4f733-kube-api-access-gtbzf podName:c888f2c2-aaea-43d1-b81a-fe2762b4f733 nodeName:}" failed. No retries permitted until 2025-10-09 20:19:51.03266917 +0000 UTC m=+6.900035901 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-gtbzf" (UniqueName: "kubernetes.io/projected/c888f2c2-aaea-43d1-b81a-fe2762b4f733-kube-api-access-gtbzf") pod "kube-proxy-jnlzf" (UID: "c888f2c2-aaea-43d1-b81a-fe2762b4f733") : failed to sync configmap cache: timed out waiting for the condition
	Oct 09 20:19:51 default-k8s-diff-port-417984 kubelet[1349]: I1009 20:19:51.034808    1349 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	Oct 09 20:19:51 default-k8s-diff-port-417984 kubelet[1349]: W1009 20:19:51.204069    1349 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/1f0d0c8a230b788cd206633a19ec2c3f4c5347ad7d829fb182e003f40efd7670/crio-98635902353e5a0d6f243084e1bdef746833fa1b0665e15e85f19ac444856e5e WatchSource:0}: Error finding container 98635902353e5a0d6f243084e1bdef746833fa1b0665e15e85f19ac444856e5e: Status 404 returned error can't find the container with id 98635902353e5a0d6f243084e1bdef746833fa1b0665e15e85f19ac444856e5e
	Oct 09 20:19:51 default-k8s-diff-port-417984 kubelet[1349]: I1009 20:19:51.451080    1349 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-jnlzf" podStartSLOduration=2.451056948 podStartE2EDuration="2.451056948s" podCreationTimestamp="2025-10-09 20:19:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-09 20:19:51.41998163 +0000 UTC m=+7.287348361" watchObservedRunningTime="2025-10-09 20:19:51.451056948 +0000 UTC m=+7.318423687"
	Oct 09 20:19:53 default-k8s-diff-port-417984 kubelet[1349]: I1009 20:19:53.250871    1349 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-s57gp" podStartSLOduration=4.250855298 podStartE2EDuration="4.250855298s" podCreationTimestamp="2025-10-09 20:19:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-09 20:19:51.505899113 +0000 UTC m=+7.373265852" watchObservedRunningTime="2025-10-09 20:19:53.250855298 +0000 UTC m=+9.118222037"
	Oct 09 20:20:31 default-k8s-diff-port-417984 kubelet[1349]: I1009 20:20:31.788523    1349 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Oct 09 20:20:32 default-k8s-diff-port-417984 kubelet[1349]: I1009 20:20:32.009843    1349 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qk2wv\" (UniqueName: \"kubernetes.io/projected/1372d4eb-13df-43ba-add1-18330c9c110d-kube-api-access-qk2wv\") pod \"coredns-66bc5c9577-4c2vb\" (UID: \"1372d4eb-13df-43ba-add1-18330c9c110d\") " pod="kube-system/coredns-66bc5c9577-4c2vb"
	Oct 09 20:20:32 default-k8s-diff-port-417984 kubelet[1349]: I1009 20:20:32.009992    1349 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/35085697-b4c2-4265-a1eb-2ced25791f19-tmp\") pod \"storage-provisioner\" (UID: \"35085697-b4c2-4265-a1eb-2ced25791f19\") " pod="kube-system/storage-provisioner"
	Oct 09 20:20:32 default-k8s-diff-port-417984 kubelet[1349]: I1009 20:20:32.010058    1349 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ggh8h\" (UniqueName: \"kubernetes.io/projected/35085697-b4c2-4265-a1eb-2ced25791f19-kube-api-access-ggh8h\") pod \"storage-provisioner\" (UID: \"35085697-b4c2-4265-a1eb-2ced25791f19\") " pod="kube-system/storage-provisioner"
	Oct 09 20:20:32 default-k8s-diff-port-417984 kubelet[1349]: I1009 20:20:32.010109    1349 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1372d4eb-13df-43ba-add1-18330c9c110d-config-volume\") pod \"coredns-66bc5c9577-4c2vb\" (UID: \"1372d4eb-13df-43ba-add1-18330c9c110d\") " pod="kube-system/coredns-66bc5c9577-4c2vb"
	Oct 09 20:20:32 default-k8s-diff-port-417984 kubelet[1349]: W1009 20:20:32.451203    1349 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/1f0d0c8a230b788cd206633a19ec2c3f4c5347ad7d829fb182e003f40efd7670/crio-3dc3aba69fa1c0e96532238835cf5c9472460d80fc0c670811ba897d5ae2dbc7 WatchSource:0}: Error finding container 3dc3aba69fa1c0e96532238835cf5c9472460d80fc0c670811ba897d5ae2dbc7: Status 404 returned error can't find the container with id 3dc3aba69fa1c0e96532238835cf5c9472460d80fc0c670811ba897d5ae2dbc7
	Oct 09 20:20:32 default-k8s-diff-port-417984 kubelet[1349]: I1009 20:20:32.585245    1349 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-4c2vb" podStartSLOduration=43.585225197 podStartE2EDuration="43.585225197s" podCreationTimestamp="2025-10-09 20:19:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-09 20:20:32.567222123 +0000 UTC m=+48.434588870" watchObservedRunningTime="2025-10-09 20:20:32.585225197 +0000 UTC m=+48.452591936"
	Oct 09 20:20:33 default-k8s-diff-port-417984 kubelet[1349]: I1009 20:20:33.568461    1349 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=42.568441066 podStartE2EDuration="42.568441066s" podCreationTimestamp="2025-10-09 20:19:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-09 20:20:32.586064601 +0000 UTC m=+48.453431340" watchObservedRunningTime="2025-10-09 20:20:33.568441066 +0000 UTC m=+49.435807805"
	Oct 09 20:20:35 default-k8s-diff-port-417984 kubelet[1349]: I1009 20:20:35.535820    1349 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q62lx\" (UniqueName: \"kubernetes.io/projected/dab2e635-8f2d-4a44-9384-70a522687435-kube-api-access-q62lx\") pod \"busybox\" (UID: \"dab2e635-8f2d-4a44-9384-70a522687435\") " pod="default/busybox"
	Oct 09 20:20:38 default-k8s-diff-port-417984 kubelet[1349]: I1009 20:20:38.595153    1349 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/busybox" podStartSLOduration=1.6141483110000001 podStartE2EDuration="3.595125732s" podCreationTimestamp="2025-10-09 20:20:35 +0000 UTC" firstStartedPulling="2025-10-09 20:20:35.757680321 +0000 UTC m=+51.625047060" lastFinishedPulling="2025-10-09 20:20:37.73865775 +0000 UTC m=+53.606024481" observedRunningTime="2025-10-09 20:20:38.594570057 +0000 UTC m=+54.461936788" watchObservedRunningTime="2025-10-09 20:20:38.595125732 +0000 UTC m=+54.462492463"
	
	
	==> storage-provisioner [4c2105d2f9727a9520383a9181b3135b1ee9491079d4df0f6ac0005ee55ec76c] <==
	I1009 20:20:32.229769       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1009 20:20:32.242420       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1009 20:20:32.242503       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1009 20:20:32.245053       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1009 20:20:32.254363       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1009 20:20:32.254540       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1009 20:20:32.254748       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-417984_198c9fc3-c115-470d-9af9-9cf3b9e159a1!
	I1009 20:20:32.256125       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"3c4c04ee-1793-46fc-b5b5-7f3b1c4ca9ba", APIVersion:"v1", ResourceVersion:"446", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-417984_198c9fc3-c115-470d-9af9-9cf3b9e159a1 became leader
	W1009 20:20:32.267817       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1009 20:20:32.273782       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1009 20:20:32.355744       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-417984_198c9fc3-c115-470d-9af9-9cf3b9e159a1!
	W1009 20:20:34.276698       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1009 20:20:34.281320       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1009 20:20:36.283961       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1009 20:20:36.288665       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1009 20:20:38.291963       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1009 20:20:38.300999       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1009 20:20:40.304856       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1009 20:20:40.309625       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1009 20:20:42.313547       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1009 20:20:42.318722       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1009 20:20:44.322618       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1009 20:20:44.328870       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1009 20:20:46.332625       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1009 20:20:46.337770       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-417984 -n default-k8s-diff-port-417984
helpers_test.go:269: (dbg) Run:  kubectl --context default-k8s-diff-port-417984 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (2.74s)
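The post-mortem above checks the API server status and then shells out to kubectl with a field selector to list any pods that are not in the Running phase. For reference, here is a minimal client-go sketch of that same query; it is illustrative only, and the kubeconfig path and printed columns are assumptions rather than part of the test harness.

	package main
	
	import (
		"context"
		"fmt"
	
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)
	
	func main() {
		// Assumption for illustration: the kubeconfig at the default location
		// points at the cluster under test (e.g. default-k8s-diff-port-417984).
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		// Same field selector the post-mortem uses, across all namespaces ("").
		pods, err := cs.CoreV1().Pods("").List(context.Background(), metav1.ListOptions{
			FieldSelector: "status.phase!=Running",
		})
		if err != nil {
			panic(err)
		}
		for _, p := range pods.Items {
			fmt.Printf("%s/%s: %s\n", p.Namespace, p.Name, p.Status.Phase)
		}
	}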

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/Pause (6.17s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p embed-certs-565110 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 pause -p embed-certs-565110 --alsologtostderr -v=1: exit status 80 (1.72256714s)

                                                
                                                
-- stdout --
	* Pausing node embed-certs-565110 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1009 20:20:53.283211  499100 out.go:360] Setting OutFile to fd 1 ...
	I1009 20:20:53.283397  499100 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1009 20:20:53.283424  499100 out.go:374] Setting ErrFile to fd 2...
	I1009 20:20:53.283444  499100 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1009 20:20:53.285348  499100 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21683-294150/.minikube/bin
	I1009 20:20:53.285717  499100 out.go:368] Setting JSON to false
	I1009 20:20:53.285786  499100 mustload.go:65] Loading cluster: embed-certs-565110
	I1009 20:20:53.286245  499100 config.go:182] Loaded profile config "embed-certs-565110": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1009 20:20:53.286776  499100 cli_runner.go:164] Run: docker container inspect embed-certs-565110 --format={{.State.Status}}
	I1009 20:20:53.308658  499100 host.go:66] Checking if "embed-certs-565110" exists ...
	I1009 20:20:53.308985  499100 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1009 20:20:53.367041  499100 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:55 OomKillDisable:true NGoroutines:65 SystemTime:2025-10-09 20:20:53.357965823 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214827008 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1009 20:20:53.367696  499100 pause.go:58] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-arm64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1758198818-20370/minikube-v1.37.0-1758198818-20370-arm64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1758198818-20370-arm64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:embed-certs-565110 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1009 20:20:53.371243  499100 out.go:179] * Pausing node embed-certs-565110 ... 
	I1009 20:20:53.374124  499100 host.go:66] Checking if "embed-certs-565110" exists ...
	I1009 20:20:53.374486  499100 ssh_runner.go:195] Run: systemctl --version
	I1009 20:20:53.374544  499100 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-565110
	I1009 20:20:53.394737  499100 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33446 SSHKeyPath:/home/jenkins/minikube-integration/21683-294150/.minikube/machines/embed-certs-565110/id_rsa Username:docker}
	I1009 20:20:53.498013  499100 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1009 20:20:53.511782  499100 pause.go:52] kubelet running: true
	I1009 20:20:53.511936  499100 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1009 20:20:53.741944  499100 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1009 20:20:53.742033  499100 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1009 20:20:53.809921  499100 cri.go:89] found id: "0f270941e80ca04066ea9be417daf5c6ce5c2ec0888d5bfff2efb8528aeb3c92"
	I1009 20:20:53.809944  499100 cri.go:89] found id: "6dd3b6f8859b6b73158b023597467ffd3bfbf74dba8207996ffb59ba32b783e5"
	I1009 20:20:53.809949  499100 cri.go:89] found id: "19c60abb724c168299c8076033b87385f420db683dd0f2474250da6b74aaf169"
	I1009 20:20:53.809953  499100 cri.go:89] found id: "3717764bae0d2e9c480c451663d8436220a28e339f7ea5f728f760e6db2361d2"
	I1009 20:20:53.809957  499100 cri.go:89] found id: "5bcf7f81c448e41e559806602e3f3a1d94582cbf78df0ab117caa5f14d6ba76a"
	I1009 20:20:53.809961  499100 cri.go:89] found id: "1de1928d9c10a7383f82f9d07f373a124ba301e004ce8acd88dd8a940cd3c874"
	I1009 20:20:53.809964  499100 cri.go:89] found id: "263af593d94482c92965e6f0511548fd1ccf9f2292e732c23158498a550ac2a4"
	I1009 20:20:53.809968  499100 cri.go:89] found id: "e15b99435508a3068f9f9d4d692dd1bd7f56391601b5b0179b6642e79aa3078f"
	I1009 20:20:53.809971  499100 cri.go:89] found id: "6d66a1c644fe699013f3d024b65f4dfa2c5f6bb2e344eef4ab51199503d6bb1f"
	I1009 20:20:53.809978  499100 cri.go:89] found id: "5a38898751c3190370fb093a21e09fb35270396f2335f7cf298f3ffeb676eab4"
	I1009 20:20:53.809981  499100 cri.go:89] found id: "dcee20808a0ab7b88a286f7a9fa5402833491c3468c0d98c6e6e41a4d387aeca"
	I1009 20:20:53.809984  499100 cri.go:89] found id: ""
	I1009 20:20:53.810034  499100 ssh_runner.go:195] Run: sudo runc list -f json
	I1009 20:20:53.821339  499100 retry.go:31] will retry after 236.642511ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-09T20:20:53Z" level=error msg="open /run/runc: no such file or directory"
	I1009 20:20:54.058959  499100 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1009 20:20:54.073362  499100 pause.go:52] kubelet running: false
	I1009 20:20:54.073435  499100 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1009 20:20:54.243607  499100 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1009 20:20:54.243686  499100 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1009 20:20:54.311133  499100 cri.go:89] found id: "0f270941e80ca04066ea9be417daf5c6ce5c2ec0888d5bfff2efb8528aeb3c92"
	I1009 20:20:54.311164  499100 cri.go:89] found id: "6dd3b6f8859b6b73158b023597467ffd3bfbf74dba8207996ffb59ba32b783e5"
	I1009 20:20:54.311170  499100 cri.go:89] found id: "19c60abb724c168299c8076033b87385f420db683dd0f2474250da6b74aaf169"
	I1009 20:20:54.311174  499100 cri.go:89] found id: "3717764bae0d2e9c480c451663d8436220a28e339f7ea5f728f760e6db2361d2"
	I1009 20:20:54.311177  499100 cri.go:89] found id: "5bcf7f81c448e41e559806602e3f3a1d94582cbf78df0ab117caa5f14d6ba76a"
	I1009 20:20:54.311180  499100 cri.go:89] found id: "1de1928d9c10a7383f82f9d07f373a124ba301e004ce8acd88dd8a940cd3c874"
	I1009 20:20:54.311184  499100 cri.go:89] found id: "263af593d94482c92965e6f0511548fd1ccf9f2292e732c23158498a550ac2a4"
	I1009 20:20:54.311198  499100 cri.go:89] found id: "e15b99435508a3068f9f9d4d692dd1bd7f56391601b5b0179b6642e79aa3078f"
	I1009 20:20:54.311205  499100 cri.go:89] found id: "6d66a1c644fe699013f3d024b65f4dfa2c5f6bb2e344eef4ab51199503d6bb1f"
	I1009 20:20:54.311216  499100 cri.go:89] found id: "5a38898751c3190370fb093a21e09fb35270396f2335f7cf298f3ffeb676eab4"
	I1009 20:20:54.311225  499100 cri.go:89] found id: "dcee20808a0ab7b88a286f7a9fa5402833491c3468c0d98c6e6e41a4d387aeca"
	I1009 20:20:54.311228  499100 cri.go:89] found id: ""
	I1009 20:20:54.311277  499100 ssh_runner.go:195] Run: sudo runc list -f json
	I1009 20:20:54.323363  499100 retry.go:31] will retry after 347.996059ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-09T20:20:54Z" level=error msg="open /run/runc: no such file or directory"
	I1009 20:20:54.672033  499100 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1009 20:20:54.685662  499100 pause.go:52] kubelet running: false
	I1009 20:20:54.685755  499100 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1009 20:20:54.848822  499100 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1009 20:20:54.848959  499100 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1009 20:20:54.919326  499100 cri.go:89] found id: "0f270941e80ca04066ea9be417daf5c6ce5c2ec0888d5bfff2efb8528aeb3c92"
	I1009 20:20:54.919399  499100 cri.go:89] found id: "6dd3b6f8859b6b73158b023597467ffd3bfbf74dba8207996ffb59ba32b783e5"
	I1009 20:20:54.919410  499100 cri.go:89] found id: "19c60abb724c168299c8076033b87385f420db683dd0f2474250da6b74aaf169"
	I1009 20:20:54.919416  499100 cri.go:89] found id: "3717764bae0d2e9c480c451663d8436220a28e339f7ea5f728f760e6db2361d2"
	I1009 20:20:54.919420  499100 cri.go:89] found id: "5bcf7f81c448e41e559806602e3f3a1d94582cbf78df0ab117caa5f14d6ba76a"
	I1009 20:20:54.919423  499100 cri.go:89] found id: "1de1928d9c10a7383f82f9d07f373a124ba301e004ce8acd88dd8a940cd3c874"
	I1009 20:20:54.919427  499100 cri.go:89] found id: "263af593d94482c92965e6f0511548fd1ccf9f2292e732c23158498a550ac2a4"
	I1009 20:20:54.919430  499100 cri.go:89] found id: "e15b99435508a3068f9f9d4d692dd1bd7f56391601b5b0179b6642e79aa3078f"
	I1009 20:20:54.919434  499100 cri.go:89] found id: "6d66a1c644fe699013f3d024b65f4dfa2c5f6bb2e344eef4ab51199503d6bb1f"
	I1009 20:20:54.919440  499100 cri.go:89] found id: "5a38898751c3190370fb093a21e09fb35270396f2335f7cf298f3ffeb676eab4"
	I1009 20:20:54.919444  499100 cri.go:89] found id: "dcee20808a0ab7b88a286f7a9fa5402833491c3468c0d98c6e6e41a4d387aeca"
	I1009 20:20:54.919447  499100 cri.go:89] found id: ""
	I1009 20:20:54.919516  499100 ssh_runner.go:195] Run: sudo runc list -f json
	I1009 20:20:54.934615  499100 out.go:203] 
	W1009 20:20:54.937528  499100 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-09T20:20:54Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-09T20:20:54Z" level=error msg="open /run/runc: no such file or directory"
	
	W1009 20:20:54.937549  499100 out.go:285] * 
	* 
	W1009 20:20:54.943111  499100 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1009 20:20:54.946085  499100 out.go:203] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-arm64 pause -p embed-certs-565110 --alsologtostderr -v=1 failed: exit status 80
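As the trace above shows, the pause path disables the kubelet, enumerates containers in the kube-system, kubernetes-dashboard and istio-operator namespaces via crictl, and then runs `sudo runc list -f json`; that command exits 1 ("open /run/runc: no such file or directory"), is retried with growing delays (~237ms, then ~348ms), and the pause finally fails with GUEST_PAUSE / exit status 80. Below is a minimal, hypothetical Go sketch of that retry-then-give-up pattern around the same command; the delays and deadline are illustrative values, not minikube's actual retry.go configuration.

	package main
	
	import (
		"fmt"
		"os/exec"
		"time"
	)
	
	func main() {
		delay := 200 * time.Millisecond
		deadline := time.Now().Add(2 * time.Second) // illustrative deadline, not minikube's
		for {
			// Same command the pause path runs on the node.
			out, err := exec.Command("sudo", "runc", "list", "-f", "json").CombinedOutput()
			if err == nil {
				fmt.Printf("%s", out)
				return
			}
			if time.Now().After(deadline) {
				// End state seen in the test: the command keeps failing and pause gives up.
				fmt.Printf("giving up: %v\n%s", err, out)
				return
			}
			fmt.Printf("will retry after %v: %v\n", delay, err)
			time.Sleep(delay)
			delay += delay / 2 // grow the wait between attempts, as the log shows
		}
	}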
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect embed-certs-565110
helpers_test.go:243: (dbg) docker inspect embed-certs-565110:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "5db0c011c6081f65675c1c7e0e0cead1ee603fc85ef523d794ffef197f368e85",
	        "Created": "2025-10-09T20:18:08.202138688Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 496051,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-09T20:19:51.517235018Z",
	            "FinishedAt": "2025-10-09T20:19:50.210409037Z"
	        },
	        "Image": "sha256:0e619a02aa1f399c748a05785ca381533d7a01df724b62f3a9ddd9e288db8f6f",
	        "ResolvConfPath": "/var/lib/docker/containers/5db0c011c6081f65675c1c7e0e0cead1ee603fc85ef523d794ffef197f368e85/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/5db0c011c6081f65675c1c7e0e0cead1ee603fc85ef523d794ffef197f368e85/hostname",
	        "HostsPath": "/var/lib/docker/containers/5db0c011c6081f65675c1c7e0e0cead1ee603fc85ef523d794ffef197f368e85/hosts",
	        "LogPath": "/var/lib/docker/containers/5db0c011c6081f65675c1c7e0e0cead1ee603fc85ef523d794ffef197f368e85/5db0c011c6081f65675c1c7e0e0cead1ee603fc85ef523d794ffef197f368e85-json.log",
	        "Name": "/embed-certs-565110",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "embed-certs-565110:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "embed-certs-565110",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "5db0c011c6081f65675c1c7e0e0cead1ee603fc85ef523d794ffef197f368e85",
	                "LowerDir": "/var/lib/docker/overlay2/1f20732bafca7c4ec6bbe75518ab73ef01fcee46e54c892cfb75e2f68114dce6-init/diff:/var/lib/docker/overlay2/810a91395ed9b7ed2c0bbbdee8600efcf64f88722cbabc47d471235a9f901ed9/diff",
	                "MergedDir": "/var/lib/docker/overlay2/1f20732bafca7c4ec6bbe75518ab73ef01fcee46e54c892cfb75e2f68114dce6/merged",
	                "UpperDir": "/var/lib/docker/overlay2/1f20732bafca7c4ec6bbe75518ab73ef01fcee46e54c892cfb75e2f68114dce6/diff",
	                "WorkDir": "/var/lib/docker/overlay2/1f20732bafca7c4ec6bbe75518ab73ef01fcee46e54c892cfb75e2f68114dce6/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "embed-certs-565110",
	                "Source": "/var/lib/docker/volumes/embed-certs-565110/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-565110",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-565110",
	                "name.minikube.sigs.k8s.io": "embed-certs-565110",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "058cbe69479446d85a6f1bc48737b773fb287e5104f6524338b6de4704a814e8",
	            "SandboxKey": "/var/run/docker/netns/058cbe694794",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33446"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33447"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33450"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33448"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33449"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "embed-certs-565110": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "9a:2a:e8:98:bc:da",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "c39245925c93cf03ed8abe3702c98fe11aa5fe2a748150abd863ee2a4578bafb",
	                    "EndpointID": "ca2e0a22e65243efa4cba5a9e2cf1e701a9f79cd2b5e6bf91b735f6228dcb83f",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "embed-certs-565110",
	                        "5db0c011c608"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
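The inspect JSON above is the same data that the run later queries with Go templates (see the repeated `docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-565110` calls further down in this log). Purely as an illustration, and not part of the test run, a minimal Go sketch of that port lookup might look like the following; the file name, profile name, and error handling are assumptions:

// inspect_port.go - illustration only; not part of the minikube test suite.
// Resolves the host port mapped to the container's 22/tcp, i.e. the field
// shown under NetworkSettings.Ports in the inspect output above.
package main

import (
	"fmt"
	"log"
	"os/exec"
	"strings"
)

func main() {
	// Same Go template that minikube's cli_runner uses later in this log.
	format := `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
	out, err := exec.Command("docker", "container", "inspect", "-f", format, "embed-certs-565110").Output()
	if err != nil {
		log.Fatalf("docker inspect failed: %v", err)
	}
	// In the report above this resolves to 33446.
	fmt.Println("ssh host port:", strings.TrimSpace(string(out)))
}
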
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-565110 -n embed-certs-565110
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-565110 -n embed-certs-565110: exit status 2 (384.657673ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
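The harness then probes the host state with `out/minikube-linux-arm64 status --format={{.Host}}`; `minikube status` reflects non-running components in its exit code, which is why a non-zero exit with `Running` on stdout is still tolerated (the "may be ok" note below). Purely as a sketch, assuming the same binary path and profile name as in this report, that check can be reproduced like this:

// status_check.go - illustration only, not part of helpers_test.go.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	cmd := exec.Command("out/minikube-linux-arm64", "status", "--format={{.Host}}",
		"-p", "embed-certs-565110", "-n", "embed-certs-565110")
	out, err := cmd.Output() // stdout is still captured when the exit code is non-zero
	host := strings.TrimSpace(string(out))
	if err != nil {
		// Non-zero exit: minikube encodes which components are not running in the
		// exit status, so "Running" plus a non-zero code can still be acceptable
		// immediately after a pause.
		fmt.Printf("host=%q, status error: %v (may be ok)\n", host, err)
		return
	}
	fmt.Println("host:", host)
}
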
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p embed-certs-565110 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p embed-certs-565110 logs -n 25: (1.27801823s)
helpers_test.go:260: TestStartStop/group/embed-certs/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬──────────────
───────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼──────────────
───────┤
	│ addons  │ enable dashboard -p old-k8s-version-670649 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-670649       │ jenkins │ v1.37.0 │ 09 Oct 25 20:16 UTC │ 09 Oct 25 20:16 UTC │
	│ start   │ -p old-k8s-version-670649 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-670649       │ jenkins │ v1.37.0 │ 09 Oct 25 20:16 UTC │ 09 Oct 25 20:17 UTC │
	│ addons  │ enable metrics-server -p no-preload-020313 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-020313            │ jenkins │ v1.37.0 │ 09 Oct 25 20:17 UTC │                     │
	│ stop    │ -p no-preload-020313 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-020313            │ jenkins │ v1.37.0 │ 09 Oct 25 20:17 UTC │ 09 Oct 25 20:17 UTC │
	│ image   │ old-k8s-version-670649 image list --format=json                                                                                                                                                                                               │ old-k8s-version-670649       │ jenkins │ v1.37.0 │ 09 Oct 25 20:17 UTC │ 09 Oct 25 20:17 UTC │
	│ pause   │ -p old-k8s-version-670649 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-670649       │ jenkins │ v1.37.0 │ 09 Oct 25 20:17 UTC │                     │
	│ addons  │ enable dashboard -p no-preload-020313 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-020313            │ jenkins │ v1.37.0 │ 09 Oct 25 20:17 UTC │ 09 Oct 25 20:17 UTC │
	│ start   │ -p no-preload-020313 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-020313            │ jenkins │ v1.37.0 │ 09 Oct 25 20:17 UTC │ 09 Oct 25 20:18 UTC │
	│ delete  │ -p old-k8s-version-670649                                                                                                                                                                                                                     │ old-k8s-version-670649       │ jenkins │ v1.37.0 │ 09 Oct 25 20:17 UTC │ 09 Oct 25 20:17 UTC │
	│ delete  │ -p old-k8s-version-670649                                                                                                                                                                                                                     │ old-k8s-version-670649       │ jenkins │ v1.37.0 │ 09 Oct 25 20:18 UTC │ 09 Oct 25 20:18 UTC │
	│ start   │ -p embed-certs-565110 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-565110           │ jenkins │ v1.37.0 │ 09 Oct 25 20:18 UTC │ 09 Oct 25 20:19 UTC │
	│ image   │ no-preload-020313 image list --format=json                                                                                                                                                                                                    │ no-preload-020313            │ jenkins │ v1.37.0 │ 09 Oct 25 20:18 UTC │ 09 Oct 25 20:18 UTC │
	│ pause   │ -p no-preload-020313 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-020313            │ jenkins │ v1.37.0 │ 09 Oct 25 20:18 UTC │                     │
	│ delete  │ -p no-preload-020313                                                                                                                                                                                                                          │ no-preload-020313            │ jenkins │ v1.37.0 │ 09 Oct 25 20:19 UTC │ 09 Oct 25 20:19 UTC │
	│ delete  │ -p no-preload-020313                                                                                                                                                                                                                          │ no-preload-020313            │ jenkins │ v1.37.0 │ 09 Oct 25 20:19 UTC │ 09 Oct 25 20:19 UTC │
	│ delete  │ -p disable-driver-mounts-613966                                                                                                                                                                                                               │ disable-driver-mounts-613966 │ jenkins │ v1.37.0 │ 09 Oct 25 20:19 UTC │ 09 Oct 25 20:19 UTC │
	│ start   │ -p default-k8s-diff-port-417984 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-417984 │ jenkins │ v1.37.0 │ 09 Oct 25 20:19 UTC │ 09 Oct 25 20:20 UTC │
	│ addons  │ enable metrics-server -p embed-certs-565110 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-565110           │ jenkins │ v1.37.0 │ 09 Oct 25 20:19 UTC │                     │
	│ stop    │ -p embed-certs-565110 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-565110           │ jenkins │ v1.37.0 │ 09 Oct 25 20:19 UTC │ 09 Oct 25 20:19 UTC │
	│ addons  │ enable dashboard -p embed-certs-565110 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-565110           │ jenkins │ v1.37.0 │ 09 Oct 25 20:19 UTC │ 09 Oct 25 20:19 UTC │
	│ start   │ -p embed-certs-565110 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-565110           │ jenkins │ v1.37.0 │ 09 Oct 25 20:19 UTC │ 09 Oct 25 20:20 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-417984 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-417984 │ jenkins │ v1.37.0 │ 09 Oct 25 20:20 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-417984 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-417984 │ jenkins │ v1.37.0 │ 09 Oct 25 20:20 UTC │                     │
	│ image   │ embed-certs-565110 image list --format=json                                                                                                                                                                                                   │ embed-certs-565110           │ jenkins │ v1.37.0 │ 09 Oct 25 20:20 UTC │ 09 Oct 25 20:20 UTC │
	│ pause   │ -p embed-certs-565110 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-565110           │ jenkins │ v1.37.0 │ 09 Oct 25 20:20 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴──────────────
───────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/09 20:19:51
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1009 20:19:51.102892  495888 out.go:360] Setting OutFile to fd 1 ...
	I1009 20:19:51.103497  495888 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1009 20:19:51.103530  495888 out.go:374] Setting ErrFile to fd 2...
	I1009 20:19:51.103552  495888 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1009 20:19:51.103850  495888 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21683-294150/.minikube/bin
	I1009 20:19:51.104352  495888 out.go:368] Setting JSON to false
	I1009 20:19:51.105472  495888 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":10931,"bootTime":1760030261,"procs":190,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1009 20:19:51.105578  495888 start.go:143] virtualization:  
	I1009 20:19:51.109357  495888 out.go:179] * [embed-certs-565110] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1009 20:19:51.112633  495888 out.go:179]   - MINIKUBE_LOCATION=21683
	I1009 20:19:51.112721  495888 notify.go:221] Checking for updates...
	I1009 20:19:51.116857  495888 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1009 20:19:51.119909  495888 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21683-294150/kubeconfig
	I1009 20:19:51.122999  495888 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21683-294150/.minikube
	I1009 20:19:51.126053  495888 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1009 20:19:51.131523  495888 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1009 20:19:51.136756  495888 config.go:182] Loaded profile config "embed-certs-565110": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1009 20:19:51.137593  495888 driver.go:422] Setting default libvirt URI to qemu:///system
	I1009 20:19:51.187119  495888 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1009 20:19:51.187242  495888 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1009 20:19:51.297737  495888 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-09 20:19:51.282094864 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214827008 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1009 20:19:51.297886  495888 docker.go:319] overlay module found
	I1009 20:19:51.303198  495888 out.go:179] * Using the docker driver based on existing profile
	I1009 20:19:51.306311  495888 start.go:309] selected driver: docker
	I1009 20:19:51.306342  495888 start.go:930] validating driver "docker" against &{Name:embed-certs-565110 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-565110 Namespace:default APIServerHAVIP: APIServerN
ame:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:
9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1009 20:19:51.306461  495888 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1009 20:19:51.307345  495888 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1009 20:19:51.421645  495888 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-09 20:19:51.408696868 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214827008 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1009 20:19:51.422009  495888 start_flags.go:993] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1009 20:19:51.422036  495888 cni.go:84] Creating CNI manager for ""
	I1009 20:19:51.422095  495888 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1009 20:19:51.422133  495888 start.go:353] cluster config:
	{Name:embed-certs-565110 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-565110 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Contain
erRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false
DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1009 20:19:51.425473  495888 out.go:179] * Starting "embed-certs-565110" primary control-plane node in "embed-certs-565110" cluster
	I1009 20:19:51.428909  495888 cache.go:123] Beginning downloading kic base image for docker with crio
	I1009 20:19:51.431951  495888 out.go:179] * Pulling base image v0.0.48-1759745255-21703 ...
	I1009 20:19:49.888113  492745 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1009 20:19:49.888143  492745 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1009 20:19:49.888230  492745 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-417984
	I1009 20:19:49.908801  492745 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1009 20:19:49.908825  492745 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1009 20:19:49.908912  492745 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-417984
	I1009 20:19:49.938475  492745 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33441 SSHKeyPath:/home/jenkins/minikube-integration/21683-294150/.minikube/machines/default-k8s-diff-port-417984/id_rsa Username:docker}
	I1009 20:19:49.953320  492745 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33441 SSHKeyPath:/home/jenkins/minikube-integration/21683-294150/.minikube/machines/default-k8s-diff-port-417984/id_rsa Username:docker}
	I1009 20:19:50.098321  492745 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1009 20:19:50.189335  492745 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1009 20:19:50.195949  492745 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1009 20:19:50.304820  492745 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1009 20:19:50.669429  492745 start.go:977] {"host.minikube.internal": 192.168.85.1} host record injected into CoreDNS's ConfigMap
	I1009 20:19:50.671170  492745 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-417984" to be "Ready" ...
	I1009 20:19:51.187848  492745 kapi.go:214] "coredns" deployment in "kube-system" namespace and "default-k8s-diff-port-417984" context rescaled to 1 replicas
	I1009 20:19:51.426572  492745 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.230578231s)
	I1009 20:19:51.426624  492745 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.121763432s)
	I1009 20:19:51.451362  492745 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1009 20:19:51.434864  495888 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1009 20:19:51.434951  495888 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21683-294150/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1009 20:19:51.434963  495888 cache.go:58] Caching tarball of preloaded images
	I1009 20:19:51.435068  495888 preload.go:233] Found /home/jenkins/minikube-integration/21683-294150/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1009 20:19:51.435079  495888 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1009 20:19:51.435197  495888 profile.go:143] Saving config to /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/embed-certs-565110/config.json ...
	I1009 20:19:51.435452  495888 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon
	I1009 20:19:51.457590  495888 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon, skipping pull
	I1009 20:19:51.457609  495888 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 exists in daemon, skipping load
	I1009 20:19:51.457621  495888 cache.go:232] Successfully downloaded all kic artifacts
	I1009 20:19:51.457644  495888 start.go:361] acquireMachinesLock for embed-certs-565110: {Name:mk32ec325145c7dbf708685a0b7d3c4450230c14 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1009 20:19:51.457699  495888 start.go:365] duration metric: took 38.269µs to acquireMachinesLock for "embed-certs-565110"
	I1009 20:19:51.457718  495888 start.go:97] Skipping create...Using existing machine configuration
	I1009 20:19:51.457724  495888 fix.go:55] fixHost starting: 
	I1009 20:19:51.457987  495888 cli_runner.go:164] Run: docker container inspect embed-certs-565110 --format={{.State.Status}}
	I1009 20:19:51.478706  495888 fix.go:113] recreateIfNeeded on embed-certs-565110: state=Stopped err=<nil>
	W1009 20:19:51.478734  495888 fix.go:139] unexpected machine state, will restart: <nil>
	I1009 20:19:51.454825  492745 addons.go:514] duration metric: took 1.631795731s for enable addons: enabled=[storage-provisioner default-storageclass]
	I1009 20:19:51.482974  495888 out.go:252] * Restarting existing docker container for "embed-certs-565110" ...
	I1009 20:19:51.483091  495888 cli_runner.go:164] Run: docker start embed-certs-565110
	I1009 20:19:51.807227  495888 cli_runner.go:164] Run: docker container inspect embed-certs-565110 --format={{.State.Status}}
	I1009 20:19:51.842702  495888 kic.go:430] container "embed-certs-565110" state is running.
	I1009 20:19:51.843128  495888 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-565110
	I1009 20:19:51.877524  495888 profile.go:143] Saving config to /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/embed-certs-565110/config.json ...
	I1009 20:19:51.877765  495888 machine.go:93] provisionDockerMachine start ...
	I1009 20:19:51.877836  495888 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-565110
	I1009 20:19:51.907208  495888 main.go:141] libmachine: Using SSH client type: native
	I1009 20:19:51.907536  495888 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33446 <nil> <nil>}
	I1009 20:19:51.907553  495888 main.go:141] libmachine: About to run SSH command:
	hostname
	I1009 20:19:51.908974  495888 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:33816->127.0.0.1:33446: read: connection reset by peer
	I1009 20:19:55.081220  495888 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-565110
	
	I1009 20:19:55.081310  495888 ubuntu.go:182] provisioning hostname "embed-certs-565110"
	I1009 20:19:55.081383  495888 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-565110
	I1009 20:19:55.100169  495888 main.go:141] libmachine: Using SSH client type: native
	I1009 20:19:55.100476  495888 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33446 <nil> <nil>}
	I1009 20:19:55.100493  495888 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-565110 && echo "embed-certs-565110" | sudo tee /etc/hostname
	I1009 20:19:55.264075  495888 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-565110
	
	I1009 20:19:55.264186  495888 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-565110
	I1009 20:19:55.282454  495888 main.go:141] libmachine: Using SSH client type: native
	I1009 20:19:55.282834  495888 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33446 <nil> <nil>}
	I1009 20:19:55.282859  495888 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-565110' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-565110/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-565110' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1009 20:19:55.433702  495888 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1009 20:19:55.433729  495888 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21683-294150/.minikube CaCertPath:/home/jenkins/minikube-integration/21683-294150/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21683-294150/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21683-294150/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21683-294150/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21683-294150/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21683-294150/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21683-294150/.minikube}
	I1009 20:19:55.433752  495888 ubuntu.go:190] setting up certificates
	I1009 20:19:55.433762  495888 provision.go:84] configureAuth start
	I1009 20:19:55.433835  495888 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-565110
	I1009 20:19:55.451034  495888 provision.go:143] copyHostCerts
	I1009 20:19:55.451107  495888 exec_runner.go:144] found /home/jenkins/minikube-integration/21683-294150/.minikube/ca.pem, removing ...
	I1009 20:19:55.451131  495888 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21683-294150/.minikube/ca.pem
	I1009 20:19:55.451208  495888 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-294150/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21683-294150/.minikube/ca.pem (1078 bytes)
	I1009 20:19:55.451360  495888 exec_runner.go:144] found /home/jenkins/minikube-integration/21683-294150/.minikube/cert.pem, removing ...
	I1009 20:19:55.451370  495888 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21683-294150/.minikube/cert.pem
	I1009 20:19:55.451400  495888 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-294150/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21683-294150/.minikube/cert.pem (1123 bytes)
	I1009 20:19:55.451482  495888 exec_runner.go:144] found /home/jenkins/minikube-integration/21683-294150/.minikube/key.pem, removing ...
	I1009 20:19:55.451493  495888 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21683-294150/.minikube/key.pem
	I1009 20:19:55.451520  495888 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-294150/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21683-294150/.minikube/key.pem (1679 bytes)
	I1009 20:19:55.451581  495888 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21683-294150/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21683-294150/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21683-294150/.minikube/certs/ca-key.pem org=jenkins.embed-certs-565110 san=[127.0.0.1 192.168.76.2 embed-certs-565110 localhost minikube]
	I1009 20:19:55.723228  495888 provision.go:177] copyRemoteCerts
	I1009 20:19:55.723701  495888 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1009 20:19:55.723756  495888 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-565110
	I1009 20:19:55.745356  495888 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33446 SSHKeyPath:/home/jenkins/minikube-integration/21683-294150/.minikube/machines/embed-certs-565110/id_rsa Username:docker}
	I1009 20:19:55.853673  495888 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-294150/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1009 20:19:55.872520  495888 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-294150/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1009 20:19:55.891414  495888 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-294150/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1009 20:19:55.911282  495888 provision.go:87] duration metric: took 477.503506ms to configureAuth
	I1009 20:19:55.911322  495888 ubuntu.go:206] setting minikube options for container-runtime
	I1009 20:19:55.911556  495888 config.go:182] Loaded profile config "embed-certs-565110": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1009 20:19:55.911693  495888 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-565110
	I1009 20:19:55.935681  495888 main.go:141] libmachine: Using SSH client type: native
	I1009 20:19:55.935991  495888 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33446 <nil> <nil>}
	I1009 20:19:55.936007  495888 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	W1009 20:19:52.674126  492745 node_ready.go:57] node "default-k8s-diff-port-417984" has "Ready":"False" status (will retry)
	W1009 20:19:54.674242  492745 node_ready.go:57] node "default-k8s-diff-port-417984" has "Ready":"False" status (will retry)
	W1009 20:19:56.675208  492745 node_ready.go:57] node "default-k8s-diff-port-417984" has "Ready":"False" status (will retry)
	I1009 20:19:56.260763  495888 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1009 20:19:56.260789  495888 machine.go:96] duration metric: took 4.383005849s to provisionDockerMachine
	I1009 20:19:56.260800  495888 start.go:294] postStartSetup for "embed-certs-565110" (driver="docker")
	I1009 20:19:56.260819  495888 start.go:323] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1009 20:19:56.260900  495888 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1009 20:19:56.260943  495888 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-565110
	I1009 20:19:56.286630  495888 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33446 SSHKeyPath:/home/jenkins/minikube-integration/21683-294150/.minikube/machines/embed-certs-565110/id_rsa Username:docker}
	I1009 20:19:56.390555  495888 ssh_runner.go:195] Run: cat /etc/os-release
	I1009 20:19:56.395007  495888 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1009 20:19:56.395034  495888 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1009 20:19:56.395044  495888 filesync.go:126] Scanning /home/jenkins/minikube-integration/21683-294150/.minikube/addons for local assets ...
	I1009 20:19:56.395097  495888 filesync.go:126] Scanning /home/jenkins/minikube-integration/21683-294150/.minikube/files for local assets ...
	I1009 20:19:56.395176  495888 filesync.go:149] local asset: /home/jenkins/minikube-integration/21683-294150/.minikube/files/etc/ssl/certs/2960022.pem -> 2960022.pem in /etc/ssl/certs
	I1009 20:19:56.395272  495888 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1009 20:19:56.402958  495888 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-294150/.minikube/files/etc/ssl/certs/2960022.pem --> /etc/ssl/certs/2960022.pem (1708 bytes)
	I1009 20:19:56.424396  495888 start.go:297] duration metric: took 163.580707ms for postStartSetup
	I1009 20:19:56.424478  495888 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1009 20:19:56.424533  495888 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-565110
	I1009 20:19:56.447227  495888 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33446 SSHKeyPath:/home/jenkins/minikube-integration/21683-294150/.minikube/machines/embed-certs-565110/id_rsa Username:docker}
	I1009 20:19:56.550726  495888 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1009 20:19:56.556179  495888 fix.go:57] duration metric: took 5.098447768s for fixHost
	I1009 20:19:56.556209  495888 start.go:84] releasing machines lock for "embed-certs-565110", held for 5.098501504s
	I1009 20:19:56.556286  495888 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-565110
	I1009 20:19:56.573374  495888 ssh_runner.go:195] Run: cat /version.json
	I1009 20:19:56.573416  495888 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1009 20:19:56.573438  495888 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-565110
	I1009 20:19:56.573478  495888 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-565110
	I1009 20:19:56.593761  495888 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33446 SSHKeyPath:/home/jenkins/minikube-integration/21683-294150/.minikube/machines/embed-certs-565110/id_rsa Username:docker}
	I1009 20:19:56.624539  495888 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33446 SSHKeyPath:/home/jenkins/minikube-integration/21683-294150/.minikube/machines/embed-certs-565110/id_rsa Username:docker}
	I1009 20:19:56.701349  495888 ssh_runner.go:195] Run: systemctl --version
	I1009 20:19:56.800207  495888 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1009 20:19:56.837954  495888 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1009 20:19:56.842936  495888 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1009 20:19:56.843020  495888 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1009 20:19:56.851187  495888 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1009 20:19:56.851220  495888 start.go:496] detecting cgroup driver to use...
	I1009 20:19:56.851267  495888 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1009 20:19:56.851338  495888 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1009 20:19:56.868899  495888 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1009 20:19:56.882641  495888 docker.go:218] disabling cri-docker service (if available) ...
	I1009 20:19:56.882748  495888 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1009 20:19:56.901981  495888 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1009 20:19:56.922675  495888 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1009 20:19:57.045263  495888 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1009 20:19:57.164062  495888 docker.go:234] disabling docker service ...
	I1009 20:19:57.164140  495888 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1009 20:19:57.182535  495888 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1009 20:19:57.196529  495888 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1009 20:19:57.316352  495888 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1009 20:19:57.436860  495888 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1009 20:19:57.451031  495888 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1009 20:19:57.466163  495888 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1009 20:19:57.466305  495888 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 20:19:57.475527  495888 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1009 20:19:57.475677  495888 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 20:19:57.485065  495888 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 20:19:57.494276  495888 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 20:19:57.503522  495888 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1009 20:19:57.512068  495888 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 20:19:57.527270  495888 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 20:19:57.536150  495888 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 20:19:57.547538  495888 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1009 20:19:57.555776  495888 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1009 20:19:57.563474  495888 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 20:19:57.687781  495888 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1009 20:19:57.832964  495888 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1009 20:19:57.833043  495888 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1009 20:19:57.837082  495888 start.go:564] Will wait 60s for crictl version
	I1009 20:19:57.837268  495888 ssh_runner.go:195] Run: which crictl
	I1009 20:19:57.841002  495888 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1009 20:19:57.884119  495888 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1009 20:19:57.884206  495888 ssh_runner.go:195] Run: crio --version
	I1009 20:19:57.920601  495888 ssh_runner.go:195] Run: crio --version
	I1009 20:19:57.953231  495888 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1009 20:19:57.956094  495888 cli_runner.go:164] Run: docker network inspect embed-certs-565110 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1009 20:19:57.973183  495888 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1009 20:19:57.977379  495888 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1009 20:19:57.987566  495888 kubeadm.go:883] updating cluster {Name:embed-certs-565110 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-565110 Namespace:default APIServerHAVIP: APIServerName:minikubeCA AP
IServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docke
r BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1009 20:19:57.987690  495888 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1009 20:19:57.987753  495888 ssh_runner.go:195] Run: sudo crictl images --output json
	I1009 20:19:58.034743  495888 crio.go:514] all images are preloaded for cri-o runtime.
	I1009 20:19:58.034768  495888 crio.go:433] Images already preloaded, skipping extraction
	I1009 20:19:58.034837  495888 ssh_runner.go:195] Run: sudo crictl images --output json
	I1009 20:19:58.063612  495888 crio.go:514] all images are preloaded for cri-o runtime.
	I1009 20:19:58.063641  495888 cache_images.go:85] Images are preloaded, skipping loading
	I1009 20:19:58.063649  495888 kubeadm.go:934] updating node { 192.168.76.2 8443 v1.34.1 crio true true} ...
	I1009 20:19:58.063757  495888 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=embed-certs-565110 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:embed-certs-565110 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1009 20:19:58.063850  495888 ssh_runner.go:195] Run: crio config
	I1009 20:19:58.119226  495888 cni.go:84] Creating CNI manager for ""
	I1009 20:19:58.119250  495888 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1009 20:19:58.119270  495888 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1009 20:19:58.119317  495888 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-565110 NodeName:embed-certs-565110 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1009 20:19:58.119477  495888 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-565110"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1009 20:19:58.119554  495888 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1009 20:19:58.127994  495888 binaries.go:44] Found k8s binaries, skipping transfer
	I1009 20:19:58.128078  495888 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1009 20:19:58.136084  495888 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (368 bytes)
	I1009 20:19:58.150168  495888 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1009 20:19:58.164940  495888 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2215 bytes)
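	The generated kubeadm config shown above is what gets written to /var/tmp/minikube/kubeadm.yaml.new on the node. As an illustration only (this is not a step the test run performs), a config like this can be dry-run against the pinned kubeadm binary; the binary path below follows the binaries directory listed in the log and is an assumption:
	    # hedged sketch, not executed by minikube; assumes kubeadm sits next to the kubelet binary shown above
	    sudo /var/lib/minikube/binaries/v1.34.1/kubeadm init \
	        --config /var/tmp/minikube/kubeadm.yaml.new --dry-run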
	I1009 20:19:58.181309  495888 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1009 20:19:58.185366  495888 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1009 20:19:58.195602  495888 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 20:19:58.316882  495888 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1009 20:19:58.332912  495888 certs.go:69] Setting up /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/embed-certs-565110 for IP: 192.168.76.2
	I1009 20:19:58.332938  495888 certs.go:195] generating shared ca certs ...
	I1009 20:19:58.332955  495888 certs.go:227] acquiring lock for ca certs: {Name:mk21fccf7de12baa51e226eec44546b42b579a70 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 20:19:58.333097  495888 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21683-294150/.minikube/ca.key
	I1009 20:19:58.333194  495888 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21683-294150/.minikube/proxy-client-ca.key
	I1009 20:19:58.333206  495888 certs.go:257] generating profile certs ...
	I1009 20:19:58.333308  495888 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/embed-certs-565110/client.key
	I1009 20:19:58.333377  495888 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/embed-certs-565110/apiserver.key.e7b9ab9d
	I1009 20:19:58.333427  495888 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/embed-certs-565110/proxy-client.key
	I1009 20:19:58.333542  495888 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-294150/.minikube/certs/296002.pem (1338 bytes)
	W1009 20:19:58.333574  495888 certs.go:480] ignoring /home/jenkins/minikube-integration/21683-294150/.minikube/certs/296002_empty.pem, impossibly tiny 0 bytes
	I1009 20:19:58.333587  495888 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-294150/.minikube/certs/ca-key.pem (1679 bytes)
	I1009 20:19:58.333618  495888 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-294150/.minikube/certs/ca.pem (1078 bytes)
	I1009 20:19:58.333645  495888 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-294150/.minikube/certs/cert.pem (1123 bytes)
	I1009 20:19:58.333674  495888 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-294150/.minikube/certs/key.pem (1679 bytes)
	I1009 20:19:58.333723  495888 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-294150/.minikube/files/etc/ssl/certs/2960022.pem (1708 bytes)
	I1009 20:19:58.334393  495888 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-294150/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1009 20:19:58.356891  495888 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-294150/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1009 20:19:58.378429  495888 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-294150/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1009 20:19:58.402388  495888 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-294150/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1009 20:19:58.429843  495888 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/embed-certs-565110/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1009 20:19:58.457145  495888 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/embed-certs-565110/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1009 20:19:58.482912  495888 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/embed-certs-565110/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1009 20:19:58.511578  495888 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/embed-certs-565110/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1009 20:19:58.532342  495888 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-294150/.minikube/certs/296002.pem --> /usr/share/ca-certificates/296002.pem (1338 bytes)
	I1009 20:19:58.560879  495888 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-294150/.minikube/files/etc/ssl/certs/2960022.pem --> /usr/share/ca-certificates/2960022.pem (1708 bytes)
	I1009 20:19:58.585808  495888 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-294150/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1009 20:19:58.606843  495888 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1009 20:19:58.621525  495888 ssh_runner.go:195] Run: openssl version
	I1009 20:19:58.628148  495888 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/296002.pem && ln -fs /usr/share/ca-certificates/296002.pem /etc/ssl/certs/296002.pem"
	I1009 20:19:58.637529  495888 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/296002.pem
	I1009 20:19:58.641561  495888 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  9 19:08 /usr/share/ca-certificates/296002.pem
	I1009 20:19:58.641652  495888 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/296002.pem
	I1009 20:19:58.687261  495888 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/296002.pem /etc/ssl/certs/51391683.0"
	I1009 20:19:58.695792  495888 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2960022.pem && ln -fs /usr/share/ca-certificates/2960022.pem /etc/ssl/certs/2960022.pem"
	I1009 20:19:58.704978  495888 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2960022.pem
	I1009 20:19:58.709478  495888 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  9 19:08 /usr/share/ca-certificates/2960022.pem
	I1009 20:19:58.709569  495888 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2960022.pem
	I1009 20:19:58.751071  495888 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2960022.pem /etc/ssl/certs/3ec20f2e.0"
	I1009 20:19:58.759246  495888 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1009 20:19:58.767814  495888 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1009 20:19:58.772140  495888 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  9 19:01 /usr/share/ca-certificates/minikubeCA.pem
	I1009 20:19:58.772208  495888 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1009 20:19:58.813601  495888 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
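	The three certificate installs above all follow the same pattern: copy the PEM into /usr/share/ca-certificates, compute its OpenSSL subject hash, and symlink it under /etc/ssl/certs/<hash>.0 so the standard trust-store lookup finds it. A minimal sketch of that pattern, with a placeholder cert name:
	    # hedged sketch of the hash-and-symlink step seen in the log (example.pem is a placeholder)
	    hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/example.pem)
	    sudo ln -fs /usr/share/ca-certificates/example.pem "/etc/ssl/certs/${hash}.0"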
	I1009 20:19:58.821600  495888 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1009 20:19:58.825585  495888 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1009 20:19:58.867122  495888 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1009 20:19:58.915221  495888 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1009 20:19:58.961581  495888 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1009 20:19:59.019501  495888 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1009 20:19:59.064706  495888 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1009 20:19:59.118556  495888 kubeadm.go:400] StartCluster: {Name:embed-certs-565110 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-565110 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1009 20:19:59.118710  495888 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1009 20:19:59.118804  495888 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1009 20:19:59.210857  495888 cri.go:89] found id: "1de1928d9c10a7383f82f9d07f373a124ba301e004ce8acd88dd8a940cd3c874"
	I1009 20:19:59.210931  495888 cri.go:89] found id: "263af593d94482c92965e6f0511548fd1ccf9f2292e732c23158498a550ac2a4"
	I1009 20:19:59.210953  495888 cri.go:89] found id: "e15b99435508a3068f9f9d4d692dd1bd7f56391601b5b0179b6642e79aa3078f"
	I1009 20:19:59.210979  495888 cri.go:89] found id: "6d66a1c644fe699013f3d024b65f4dfa2c5f6bb2e344eef4ab51199503d6bb1f"
	I1009 20:19:59.211008  495888 cri.go:89] found id: ""
	I1009 20:19:59.211087  495888 ssh_runner.go:195] Run: sudo runc list -f json
	W1009 20:19:59.238031  495888 kubeadm.go:407] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-09T20:19:59Z" level=error msg="open /run/runc: no such file or directory"
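	The CRI listing above pulls kube-system container IDs straight from CRI-O with crictl; the follow-up runc call fails simply because /run/runc does not exist, and the warning is non-fatal, so the restart path continues. A hedged sketch of the same crictl query plus a status lookup on one returned ID:
	    # hedged sketch mirroring the commands in the log; <id> is a placeholder for one returned ID
	    sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system
	    sudo crictl inspect <id>    # full CRI-O metadata for that container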
	I1009 20:19:59.238192  495888 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1009 20:19:59.251518  495888 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1009 20:19:59.251585  495888 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1009 20:19:59.251666  495888 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1009 20:19:59.267653  495888 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1009 20:19:59.268282  495888 kubeconfig.go:47] verify endpoint returned: get endpoint: "embed-certs-565110" does not appear in /home/jenkins/minikube-integration/21683-294150/kubeconfig
	I1009 20:19:59.268619  495888 kubeconfig.go:62] /home/jenkins/minikube-integration/21683-294150/kubeconfig needs updating (will repair): [kubeconfig missing "embed-certs-565110" cluster setting kubeconfig missing "embed-certs-565110" context setting]
	I1009 20:19:59.269134  495888 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-294150/kubeconfig: {Name:mke4661344aa77e10bf6690825e8d3a29b29b411 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 20:19:59.270821  495888 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1009 20:19:59.290193  495888 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.76.2
	I1009 20:19:59.290271  495888 kubeadm.go:601] duration metric: took 38.666826ms to restartPrimaryControlPlane
	I1009 20:19:59.290297  495888 kubeadm.go:402] duration metric: took 171.753193ms to StartCluster
	I1009 20:19:59.290339  495888 settings.go:142] acquiring lock: {Name:mk20228ebaa2294ae35726600a0d8058088b24a4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 20:19:59.290426  495888 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21683-294150/kubeconfig
	I1009 20:19:59.292269  495888 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-294150/kubeconfig: {Name:mke4661344aa77e10bf6690825e8d3a29b29b411 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 20:19:59.297424  495888 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1009 20:19:59.297656  495888 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1009 20:19:59.301070  495888 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-565110"
	I1009 20:19:59.301153  495888 addons.go:238] Setting addon storage-provisioner=true in "embed-certs-565110"
	W1009 20:19:59.301182  495888 addons.go:247] addon storage-provisioner should already be in state true
	I1009 20:19:59.301226  495888 host.go:66] Checking if "embed-certs-565110" exists ...
	I1009 20:19:59.301774  495888 cli_runner.go:164] Run: docker container inspect embed-certs-565110 --format={{.State.Status}}
	I1009 20:19:59.304957  495888 out.go:179] * Verifying Kubernetes components...
	I1009 20:19:59.308175  495888 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 20:19:59.311519  495888 addons.go:69] Setting default-storageclass=true in profile "embed-certs-565110"
	I1009 20:19:59.311558  495888 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-565110"
	I1009 20:19:59.311891  495888 cli_runner.go:164] Run: docker container inspect embed-certs-565110 --format={{.State.Status}}
	I1009 20:19:59.321256  495888 addons.go:69] Setting dashboard=true in profile "embed-certs-565110"
	I1009 20:19:59.321286  495888 addons.go:238] Setting addon dashboard=true in "embed-certs-565110"
	W1009 20:19:59.321295  495888 addons.go:247] addon dashboard should already be in state true
	I1009 20:19:59.321329  495888 host.go:66] Checking if "embed-certs-565110" exists ...
	I1009 20:19:59.321807  495888 cli_runner.go:164] Run: docker container inspect embed-certs-565110 --format={{.State.Status}}
	I1009 20:19:59.297901  495888 config.go:182] Loaded profile config "embed-certs-565110": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1009 20:19:59.346629  495888 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1009 20:19:59.351650  495888 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1009 20:19:59.351675  495888 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1009 20:19:59.351737  495888 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-565110
	I1009 20:19:59.370094  495888 addons.go:238] Setting addon default-storageclass=true in "embed-certs-565110"
	W1009 20:19:59.370120  495888 addons.go:247] addon default-storageclass should already be in state true
	I1009 20:19:59.370146  495888 host.go:66] Checking if "embed-certs-565110" exists ...
	I1009 20:19:59.370575  495888 cli_runner.go:164] Run: docker container inspect embed-certs-565110 --format={{.State.Status}}
	I1009 20:19:59.387047  495888 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1009 20:19:59.390364  495888 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1009 20:19:59.396710  495888 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1009 20:19:59.396746  495888 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1009 20:19:59.396832  495888 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-565110
	I1009 20:19:59.429347  495888 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1009 20:19:59.429370  495888 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1009 20:19:59.429437  495888 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-565110
	I1009 20:19:59.430849  495888 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33446 SSHKeyPath:/home/jenkins/minikube-integration/21683-294150/.minikube/machines/embed-certs-565110/id_rsa Username:docker}
	I1009 20:19:59.464220  495888 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33446 SSHKeyPath:/home/jenkins/minikube-integration/21683-294150/.minikube/machines/embed-certs-565110/id_rsa Username:docker}
	I1009 20:19:59.473433  495888 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33446 SSHKeyPath:/home/jenkins/minikube-integration/21683-294150/.minikube/machines/embed-certs-565110/id_rsa Username:docker}
	I1009 20:19:59.694365  495888 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1009 20:19:59.717524  495888 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1009 20:19:59.840086  495888 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1009 20:19:59.840111  495888 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1009 20:19:59.864918  495888 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1009 20:19:59.880095  495888 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1009 20:19:59.880122  495888 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1009 20:19:59.900442  495888 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1009 20:19:59.900468  495888 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1009 20:19:59.917184  495888 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1009 20:19:59.917207  495888 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1009 20:19:59.939131  495888 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1009 20:19:59.939158  495888 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1009 20:19:59.990478  495888 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1009 20:19:59.990505  495888 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1009 20:20:00.111657  495888 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1009 20:20:00.111680  495888 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1009 20:20:00.389825  495888 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1009 20:20:00.389849  495888 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1009 20:20:00.555191  495888 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1009 20:20:00.555224  495888 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1009 20:20:00.588057  495888 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1009 20:19:59.174811  492745 node_ready.go:57] node "default-k8s-diff-port-417984" has "Ready":"False" status (will retry)
	W1009 20:20:01.175375  492745 node_ready.go:57] node "default-k8s-diff-port-417984" has "Ready":"False" status (will retry)
	I1009 20:20:05.769604  495888 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (6.075202514s)
	I1009 20:20:05.769670  495888 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (6.052119796s)
	I1009 20:20:05.769700  495888 node_ready.go:35] waiting up to 6m0s for node "embed-certs-565110" to be "Ready" ...
	I1009 20:20:05.770038  495888 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (5.905095554s)
	I1009 20:20:05.812828  495888 node_ready.go:49] node "embed-certs-565110" is "Ready"
	I1009 20:20:05.812865  495888 node_ready.go:38] duration metric: took 43.143299ms for node "embed-certs-565110" to be "Ready" ...
	I1009 20:20:05.812881  495888 api_server.go:52] waiting for apiserver process to appear ...
	I1009 20:20:05.812944  495888 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:20:05.956969  495888 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (5.368861055s)
	I1009 20:20:05.957203  495888 api_server.go:72] duration metric: took 6.656317655s to wait for apiserver process to appear ...
	I1009 20:20:05.957223  495888 api_server.go:88] waiting for apiserver healthz status ...
	I1009 20:20:05.957275  495888 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1009 20:20:05.960249  495888 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p embed-certs-565110 addons enable metrics-server
	
	I1009 20:20:05.963094  495888 out.go:179] * Enabled addons: storage-provisioner, default-storageclass, dashboard
	I1009 20:20:05.966069  495888 addons.go:514] duration metric: took 6.668415407s for enable addons: enabled=[storage-provisioner default-storageclass dashboard]
	I1009 20:20:05.968487  495888 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1009 20:20:05.969658  495888 api_server.go:141] control plane version: v1.34.1
	I1009 20:20:05.969700  495888 api_server.go:131] duration metric: took 12.429949ms to wait for apiserver health ...
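	The healthz probe above is a plain HTTPS GET against the apiserver; anything other than a 200 with body "ok" keeps the wait loop going. Roughly the same check from the host, as a hedged sketch (the endpoint is the one reported in the log; -k skips CA verification for a quick manual look):
	    # hedged sketch of the same probe; expected output on success: ok
	    curl -sk https://192.168.76.2:8443/healthz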
	I1009 20:20:05.969710  495888 system_pods.go:43] waiting for kube-system pods to appear ...
	I1009 20:20:05.973374  495888 system_pods.go:59] 8 kube-system pods found
	I1009 20:20:05.973419  495888 system_pods.go:61] "coredns-66bc5c9577-zmqwp" [ff3de144-4c77-4486-be1e-ab88492e6a18] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1009 20:20:05.973429  495888 system_pods.go:61] "etcd-embed-certs-565110" [4ad4c426-96dc-4bd7-bf86-efc6658f3526] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1009 20:20:05.973434  495888 system_pods.go:61] "kindnet-mjfwz" [f079f818-4d35-4673-ab85-6b2fe322c9f9] Running
	I1009 20:20:05.973441  495888 system_pods.go:61] "kube-apiserver-embed-certs-565110" [5a497a15-f487-4c78-bf3e-a53c6d9f83db] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1009 20:20:05.973449  495888 system_pods.go:61] "kube-controller-manager-embed-certs-565110" [7460b871-81b4-49ff-bad1-b30126a8635c] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1009 20:20:05.973454  495888 system_pods.go:61] "kube-proxy-bhwvw" [f9d0b727-064f-4a1c-88e2-e238e5f43c4b] Running
	I1009 20:20:05.973470  495888 system_pods.go:61] "kube-scheduler-embed-certs-565110" [f706c945-9f4f-4f6d-83f8-c6cddb3ff41d] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1009 20:20:05.973474  495888 system_pods.go:61] "storage-provisioner" [9811b3ef-6b1c-42ea-a8c8-bdf0028bd024] Running
	I1009 20:20:05.973480  495888 system_pods.go:74] duration metric: took 3.763873ms to wait for pod list to return data ...
	I1009 20:20:05.973491  495888 default_sa.go:34] waiting for default service account to be created ...
	I1009 20:20:05.976144  495888 default_sa.go:45] found service account: "default"
	I1009 20:20:05.976166  495888 default_sa.go:55] duration metric: took 2.669804ms for default service account to be created ...
	I1009 20:20:05.976174  495888 system_pods.go:116] waiting for k8s-apps to be running ...
	I1009 20:20:05.980886  495888 system_pods.go:86] 8 kube-system pods found
	I1009 20:20:05.980930  495888 system_pods.go:89] "coredns-66bc5c9577-zmqwp" [ff3de144-4c77-4486-be1e-ab88492e6a18] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1009 20:20:05.980940  495888 system_pods.go:89] "etcd-embed-certs-565110" [4ad4c426-96dc-4bd7-bf86-efc6658f3526] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1009 20:20:05.980946  495888 system_pods.go:89] "kindnet-mjfwz" [f079f818-4d35-4673-ab85-6b2fe322c9f9] Running
	I1009 20:20:05.980955  495888 system_pods.go:89] "kube-apiserver-embed-certs-565110" [5a497a15-f487-4c78-bf3e-a53c6d9f83db] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1009 20:20:05.980963  495888 system_pods.go:89] "kube-controller-manager-embed-certs-565110" [7460b871-81b4-49ff-bad1-b30126a8635c] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1009 20:20:05.980968  495888 system_pods.go:89] "kube-proxy-bhwvw" [f9d0b727-064f-4a1c-88e2-e238e5f43c4b] Running
	I1009 20:20:05.980992  495888 system_pods.go:89] "kube-scheduler-embed-certs-565110" [f706c945-9f4f-4f6d-83f8-c6cddb3ff41d] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1009 20:20:05.981000  495888 system_pods.go:89] "storage-provisioner" [9811b3ef-6b1c-42ea-a8c8-bdf0028bd024] Running
	I1009 20:20:05.981007  495888 system_pods.go:126] duration metric: took 4.827699ms to wait for k8s-apps to be running ...
	I1009 20:20:05.981021  495888 system_svc.go:44] waiting for kubelet service to be running ....
	I1009 20:20:05.981085  495888 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1009 20:20:05.997748  495888 system_svc.go:56] duration metric: took 16.717209ms WaitForService to wait for kubelet
	I1009 20:20:05.997790  495888 kubeadm.go:586] duration metric: took 6.696906359s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1009 20:20:05.997808  495888 node_conditions.go:102] verifying NodePressure condition ...
	I1009 20:20:06.009632  495888 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1009 20:20:06.009684  495888 node_conditions.go:123] node cpu capacity is 2
	I1009 20:20:06.009700  495888 node_conditions.go:105] duration metric: took 11.886647ms to run NodePressure ...
	I1009 20:20:06.009715  495888 start.go:242] waiting for startup goroutines ...
	I1009 20:20:06.009723  495888 start.go:247] waiting for cluster config update ...
	I1009 20:20:06.009735  495888 start.go:256] writing updated cluster config ...
	I1009 20:20:06.010128  495888 ssh_runner.go:195] Run: rm -f paused
	I1009 20:20:06.015468  495888 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1009 20:20:06.020318  495888 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-zmqwp" in "kube-system" namespace to be "Ready" or be gone ...
	W1009 20:20:03.675557  492745 node_ready.go:57] node "default-k8s-diff-port-417984" has "Ready":"False" status (will retry)
	W1009 20:20:06.175568  492745 node_ready.go:57] node "default-k8s-diff-port-417984" has "Ready":"False" status (will retry)
	W1009 20:20:08.026794  495888 pod_ready.go:104] pod "coredns-66bc5c9577-zmqwp" is not "Ready", error: <nil>
	W1009 20:20:10.028306  495888 pod_ready.go:104] pod "coredns-66bc5c9577-zmqwp" is not "Ready", error: <nil>
	W1009 20:20:08.674193  492745 node_ready.go:57] node "default-k8s-diff-port-417984" has "Ready":"False" status (will retry)
	W1009 20:20:11.174725  492745 node_ready.go:57] node "default-k8s-diff-port-417984" has "Ready":"False" status (will retry)
	W1009 20:20:12.030211  495888 pod_ready.go:104] pod "coredns-66bc5c9577-zmqwp" is not "Ready", error: <nil>
	W1009 20:20:14.031972  495888 pod_ready.go:104] pod "coredns-66bc5c9577-zmqwp" is not "Ready", error: <nil>
	W1009 20:20:13.174928  492745 node_ready.go:57] node "default-k8s-diff-port-417984" has "Ready":"False" status (will retry)
	W1009 20:20:15.674695  492745 node_ready.go:57] node "default-k8s-diff-port-417984" has "Ready":"False" status (will retry)
	W1009 20:20:16.527708  495888 pod_ready.go:104] pod "coredns-66bc5c9577-zmqwp" is not "Ready", error: <nil>
	W1009 20:20:19.026340  495888 pod_ready.go:104] pod "coredns-66bc5c9577-zmqwp" is not "Ready", error: <nil>
	W1009 20:20:17.675546  492745 node_ready.go:57] node "default-k8s-diff-port-417984" has "Ready":"False" status (will retry)
	W1009 20:20:20.174799  492745 node_ready.go:57] node "default-k8s-diff-port-417984" has "Ready":"False" status (will retry)
	W1009 20:20:21.526662  495888 pod_ready.go:104] pod "coredns-66bc5c9577-zmqwp" is not "Ready", error: <nil>
	W1009 20:20:24.026394  495888 pod_ready.go:104] pod "coredns-66bc5c9577-zmqwp" is not "Ready", error: <nil>
	W1009 20:20:26.026883  495888 pod_ready.go:104] pod "coredns-66bc5c9577-zmqwp" is not "Ready", error: <nil>
	W1009 20:20:22.674016  492745 node_ready.go:57] node "default-k8s-diff-port-417984" has "Ready":"False" status (will retry)
	W1009 20:20:24.674813  492745 node_ready.go:57] node "default-k8s-diff-port-417984" has "Ready":"False" status (will retry)
	W1009 20:20:28.027437  495888 pod_ready.go:104] pod "coredns-66bc5c9577-zmqwp" is not "Ready", error: <nil>
	W1009 20:20:30.082635  495888 pod_ready.go:104] pod "coredns-66bc5c9577-zmqwp" is not "Ready", error: <nil>
	W1009 20:20:27.174625  492745 node_ready.go:57] node "default-k8s-diff-port-417984" has "Ready":"False" status (will retry)
	W1009 20:20:29.675082  492745 node_ready.go:57] node "default-k8s-diff-port-417984" has "Ready":"False" status (will retry)
	I1009 20:20:32.174806  492745 node_ready.go:49] node "default-k8s-diff-port-417984" is "Ready"
	I1009 20:20:32.174843  492745 node_ready.go:38] duration metric: took 41.503651759s for node "default-k8s-diff-port-417984" to be "Ready" ...
	I1009 20:20:32.174857  492745 api_server.go:52] waiting for apiserver process to appear ...
	I1009 20:20:32.174913  492745 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:20:32.190088  492745 api_server.go:72] duration metric: took 42.367482347s to wait for apiserver process to appear ...
	I1009 20:20:32.190112  492745 api_server.go:88] waiting for apiserver healthz status ...
	I1009 20:20:32.190133  492745 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8444/healthz ...
	I1009 20:20:32.198669  492745 api_server.go:279] https://192.168.85.2:8444/healthz returned 200:
	ok
	I1009 20:20:32.199868  492745 api_server.go:141] control plane version: v1.34.1
	I1009 20:20:32.199893  492745 api_server.go:131] duration metric: took 9.773485ms to wait for apiserver health ...
	I1009 20:20:32.199901  492745 system_pods.go:43] waiting for kube-system pods to appear ...
	I1009 20:20:32.205780  492745 system_pods.go:59] 8 kube-system pods found
	I1009 20:20:32.205895  492745 system_pods.go:61] "coredns-66bc5c9577-4c2vb" [1372d4eb-13df-43ba-add1-18330c9c110d] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1009 20:20:32.205919  492745 system_pods.go:61] "etcd-default-k8s-diff-port-417984" [2f46d319-463a-4bf1-b9f0-33d017fe17c5] Running
	I1009 20:20:32.205965  492745 system_pods.go:61] "kindnet-s57gp" [c69cde96-0e11-4f41-a715-961981d36066] Running
	I1009 20:20:32.205987  492745 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-417984" [fff706cc-3c18-400c-9fb7-10cec1723bc7] Running
	I1009 20:20:32.206011  492745 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-417984" [b8ecf531-e830-4a99-abcc-1fc8175c1598] Running
	I1009 20:20:32.206050  492745 system_pods.go:61] "kube-proxy-jnlzf" [c888f2c2-aaea-43d1-b81a-fe2762b4f733] Running
	I1009 20:20:32.206087  492745 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-417984" [27737a80-8846-4a8f-b4c6-2845ddca3cca] Running
	I1009 20:20:32.206111  492745 system_pods.go:61] "storage-provisioner" [35085697-b4c2-4265-a1eb-2ced25791f19] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1009 20:20:32.206136  492745 system_pods.go:74] duration metric: took 6.227019ms to wait for pod list to return data ...
	I1009 20:20:32.206167  492745 default_sa.go:34] waiting for default service account to be created ...
	I1009 20:20:32.209460  492745 default_sa.go:45] found service account: "default"
	I1009 20:20:32.209485  492745 default_sa.go:55] duration metric: took 3.292588ms for default service account to be created ...
	I1009 20:20:32.209494  492745 system_pods.go:116] waiting for k8s-apps to be running ...
	I1009 20:20:32.213461  492745 system_pods.go:86] 8 kube-system pods found
	I1009 20:20:32.213493  492745 system_pods.go:89] "coredns-66bc5c9577-4c2vb" [1372d4eb-13df-43ba-add1-18330c9c110d] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1009 20:20:32.213501  492745 system_pods.go:89] "etcd-default-k8s-diff-port-417984" [2f46d319-463a-4bf1-b9f0-33d017fe17c5] Running
	I1009 20:20:32.213507  492745 system_pods.go:89] "kindnet-s57gp" [c69cde96-0e11-4f41-a715-961981d36066] Running
	I1009 20:20:32.213512  492745 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-417984" [fff706cc-3c18-400c-9fb7-10cec1723bc7] Running
	I1009 20:20:32.213516  492745 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-417984" [b8ecf531-e830-4a99-abcc-1fc8175c1598] Running
	I1009 20:20:32.213521  492745 system_pods.go:89] "kube-proxy-jnlzf" [c888f2c2-aaea-43d1-b81a-fe2762b4f733] Running
	I1009 20:20:32.213525  492745 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-417984" [27737a80-8846-4a8f-b4c6-2845ddca3cca] Running
	I1009 20:20:32.213530  492745 system_pods.go:89] "storage-provisioner" [35085697-b4c2-4265-a1eb-2ced25791f19] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1009 20:20:32.213551  492745 retry.go:31] will retry after 208.993801ms: missing components: kube-dns
	I1009 20:20:32.426651  492745 system_pods.go:86] 8 kube-system pods found
	I1009 20:20:32.426688  492745 system_pods.go:89] "coredns-66bc5c9577-4c2vb" [1372d4eb-13df-43ba-add1-18330c9c110d] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1009 20:20:32.426697  492745 system_pods.go:89] "etcd-default-k8s-diff-port-417984" [2f46d319-463a-4bf1-b9f0-33d017fe17c5] Running
	I1009 20:20:32.426706  492745 system_pods.go:89] "kindnet-s57gp" [c69cde96-0e11-4f41-a715-961981d36066] Running
	I1009 20:20:32.426710  492745 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-417984" [fff706cc-3c18-400c-9fb7-10cec1723bc7] Running
	I1009 20:20:32.426715  492745 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-417984" [b8ecf531-e830-4a99-abcc-1fc8175c1598] Running
	I1009 20:20:32.426720  492745 system_pods.go:89] "kube-proxy-jnlzf" [c888f2c2-aaea-43d1-b81a-fe2762b4f733] Running
	I1009 20:20:32.426724  492745 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-417984" [27737a80-8846-4a8f-b4c6-2845ddca3cca] Running
	I1009 20:20:32.426729  492745 system_pods.go:89] "storage-provisioner" [35085697-b4c2-4265-a1eb-2ced25791f19] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1009 20:20:32.426744  492745 retry.go:31] will retry after 247.744501ms: missing components: kube-dns
	I1009 20:20:32.678852  492745 system_pods.go:86] 8 kube-system pods found
	I1009 20:20:32.678888  492745 system_pods.go:89] "coredns-66bc5c9577-4c2vb" [1372d4eb-13df-43ba-add1-18330c9c110d] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1009 20:20:32.678896  492745 system_pods.go:89] "etcd-default-k8s-diff-port-417984" [2f46d319-463a-4bf1-b9f0-33d017fe17c5] Running
	I1009 20:20:32.678902  492745 system_pods.go:89] "kindnet-s57gp" [c69cde96-0e11-4f41-a715-961981d36066] Running
	I1009 20:20:32.678906  492745 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-417984" [fff706cc-3c18-400c-9fb7-10cec1723bc7] Running
	I1009 20:20:32.678910  492745 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-417984" [b8ecf531-e830-4a99-abcc-1fc8175c1598] Running
	I1009 20:20:32.678914  492745 system_pods.go:89] "kube-proxy-jnlzf" [c888f2c2-aaea-43d1-b81a-fe2762b4f733] Running
	I1009 20:20:32.678918  492745 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-417984" [27737a80-8846-4a8f-b4c6-2845ddca3cca] Running
	I1009 20:20:32.678928  492745 system_pods.go:89] "storage-provisioner" [35085697-b4c2-4265-a1eb-2ced25791f19] Running
	I1009 20:20:32.678941  492745 system_pods.go:126] duration metric: took 469.440984ms to wait for k8s-apps to be running ...
	I1009 20:20:32.678952  492745 system_svc.go:44] waiting for kubelet service to be running ....
	I1009 20:20:32.679021  492745 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1009 20:20:32.695608  492745 system_svc.go:56] duration metric: took 16.635802ms WaitForService to wait for kubelet
	I1009 20:20:32.695641  492745 kubeadm.go:586] duration metric: took 42.873046641s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1009 20:20:32.695745  492745 node_conditions.go:102] verifying NodePressure condition ...
	I1009 20:20:32.699281  492745 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1009 20:20:32.699327  492745 node_conditions.go:123] node cpu capacity is 2
	I1009 20:20:32.699341  492745 node_conditions.go:105] duration metric: took 3.588625ms to run NodePressure ...
	I1009 20:20:32.699353  492745 start.go:242] waiting for startup goroutines ...
	I1009 20:20:32.699362  492745 start.go:247] waiting for cluster config update ...
	I1009 20:20:32.699378  492745 start.go:256] writing updated cluster config ...
	I1009 20:20:32.699753  492745 ssh_runner.go:195] Run: rm -f paused
	I1009 20:20:32.704321  492745 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1009 20:20:32.708386  492745 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-4c2vb" in "kube-system" namespace to be "Ready" or be gone ...
	I1009 20:20:33.714486  492745 pod_ready.go:94] pod "coredns-66bc5c9577-4c2vb" is "Ready"
	I1009 20:20:33.714516  492745 pod_ready.go:86] duration metric: took 1.006106251s for pod "coredns-66bc5c9577-4c2vb" in "kube-system" namespace to be "Ready" or be gone ...
	I1009 20:20:33.717510  492745 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-417984" in "kube-system" namespace to be "Ready" or be gone ...
	I1009 20:20:33.724030  492745 pod_ready.go:94] pod "etcd-default-k8s-diff-port-417984" is "Ready"
	I1009 20:20:33.724067  492745 pod_ready.go:86] duration metric: took 6.523752ms for pod "etcd-default-k8s-diff-port-417984" in "kube-system" namespace to be "Ready" or be gone ...
	I1009 20:20:33.727654  492745 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-417984" in "kube-system" namespace to be "Ready" or be gone ...
	I1009 20:20:33.732867  492745 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-417984" is "Ready"
	I1009 20:20:33.732899  492745 pod_ready.go:86] duration metric: took 5.219538ms for pod "kube-apiserver-default-k8s-diff-port-417984" in "kube-system" namespace to be "Ready" or be gone ...
	I1009 20:20:33.735435  492745 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-417984" in "kube-system" namespace to be "Ready" or be gone ...
	I1009 20:20:33.914645  492745 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-417984" is "Ready"
	I1009 20:20:33.914725  492745 pod_ready.go:86] duration metric: took 179.260924ms for pod "kube-controller-manager-default-k8s-diff-port-417984" in "kube-system" namespace to be "Ready" or be gone ...
	I1009 20:20:34.112850  492745 pod_ready.go:83] waiting for pod "kube-proxy-jnlzf" in "kube-system" namespace to be "Ready" or be gone ...
	I1009 20:20:34.512679  492745 pod_ready.go:94] pod "kube-proxy-jnlzf" is "Ready"
	I1009 20:20:34.512722  492745 pod_ready.go:86] duration metric: took 399.843804ms for pod "kube-proxy-jnlzf" in "kube-system" namespace to be "Ready" or be gone ...
	I1009 20:20:34.713169  492745 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-417984" in "kube-system" namespace to be "Ready" or be gone ...
	I1009 20:20:35.113508  492745 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-417984" is "Ready"
	I1009 20:20:35.113547  492745 pod_ready.go:86] duration metric: took 400.349632ms for pod "kube-scheduler-default-k8s-diff-port-417984" in "kube-system" namespace to be "Ready" or be gone ...
	I1009 20:20:35.113560  492745 pod_ready.go:40] duration metric: took 2.409163956s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1009 20:20:35.180770  492745 start.go:628] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1009 20:20:35.185314  492745 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-417984" cluster and "default" namespace by default
	W1009 20:20:32.526518  495888 pod_ready.go:104] pod "coredns-66bc5c9577-zmqwp" is not "Ready", error: <nil>
	W1009 20:20:35.026457  495888 pod_ready.go:104] pod "coredns-66bc5c9577-zmqwp" is not "Ready", error: <nil>
	W1009 20:20:37.028006  495888 pod_ready.go:104] pod "coredns-66bc5c9577-zmqwp" is not "Ready", error: <nil>
	I1009 20:20:39.026861  495888 pod_ready.go:94] pod "coredns-66bc5c9577-zmqwp" is "Ready"
	I1009 20:20:39.026887  495888 pod_ready.go:86] duration metric: took 33.006531676s for pod "coredns-66bc5c9577-zmqwp" in "kube-system" namespace to be "Ready" or be gone ...
	I1009 20:20:39.029834  495888 pod_ready.go:83] waiting for pod "etcd-embed-certs-565110" in "kube-system" namespace to be "Ready" or be gone ...
	I1009 20:20:39.039610  495888 pod_ready.go:94] pod "etcd-embed-certs-565110" is "Ready"
	I1009 20:20:39.039636  495888 pod_ready.go:86] duration metric: took 9.73968ms for pod "etcd-embed-certs-565110" in "kube-system" namespace to be "Ready" or be gone ...
	I1009 20:20:39.042389  495888 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-565110" in "kube-system" namespace to be "Ready" or be gone ...
	I1009 20:20:39.047521  495888 pod_ready.go:94] pod "kube-apiserver-embed-certs-565110" is "Ready"
	I1009 20:20:39.047551  495888 pod_ready.go:86] duration metric: took 5.132432ms for pod "kube-apiserver-embed-certs-565110" in "kube-system" namespace to be "Ready" or be gone ...
	I1009 20:20:39.050305  495888 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-565110" in "kube-system" namespace to be "Ready" or be gone ...
	I1009 20:20:39.224004  495888 pod_ready.go:94] pod "kube-controller-manager-embed-certs-565110" is "Ready"
	I1009 20:20:39.224037  495888 pod_ready.go:86] duration metric: took 173.70233ms for pod "kube-controller-manager-embed-certs-565110" in "kube-system" namespace to be "Ready" or be gone ...
	I1009 20:20:39.424038  495888 pod_ready.go:83] waiting for pod "kube-proxy-bhwvw" in "kube-system" namespace to be "Ready" or be gone ...
	I1009 20:20:39.823381  495888 pod_ready.go:94] pod "kube-proxy-bhwvw" is "Ready"
	I1009 20:20:39.823451  495888 pod_ready.go:86] duration metric: took 399.38654ms for pod "kube-proxy-bhwvw" in "kube-system" namespace to be "Ready" or be gone ...
	I1009 20:20:40.043782  495888 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-565110" in "kube-system" namespace to be "Ready" or be gone ...
	I1009 20:20:40.424476  495888 pod_ready.go:94] pod "kube-scheduler-embed-certs-565110" is "Ready"
	I1009 20:20:40.424500  495888 pod_ready.go:86] duration metric: took 380.690278ms for pod "kube-scheduler-embed-certs-565110" in "kube-system" namespace to be "Ready" or be gone ...
	I1009 20:20:40.424512  495888 pod_ready.go:40] duration metric: took 34.409013666s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1009 20:20:40.482252  495888 start.go:628] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1009 20:20:40.485424  495888 out.go:179] * Done! kubectl is now configured to use "embed-certs-565110" cluster and "default" namespace by default
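	The pod_ready loop above is minikube's own readiness poll; the same condition can be expressed with kubectl against the finished cluster. A hedged sketch, assuming the kubectl context carries the profile name as reported in the log:
	    # hedged sketch; waits up to 4m for the CoreDNS pods the log was polling
	    kubectl --context embed-certs-565110 -n kube-system wait pod \
	        -l k8s-app=kube-dns --for=condition=Ready --timeout=4m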
	
	
	==> CRI-O <==
	Oct 09 20:20:45 embed-certs-565110 crio[651]: time="2025-10-09T20:20:45.659042377Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 09 20:20:45 embed-certs-565110 crio[651]: time="2025-10-09T20:20:45.664574511Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 09 20:20:45 embed-certs-565110 crio[651]: time="2025-10-09T20:20:45.664612296Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 09 20:20:45 embed-certs-565110 crio[651]: time="2025-10-09T20:20:45.664629486Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 09 20:20:45 embed-certs-565110 crio[651]: time="2025-10-09T20:20:45.668372205Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 09 20:20:45 embed-certs-565110 crio[651]: time="2025-10-09T20:20:45.668407873Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 09 20:20:45 embed-certs-565110 crio[651]: time="2025-10-09T20:20:45.668425596Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 09 20:20:45 embed-certs-565110 crio[651]: time="2025-10-09T20:20:45.672822675Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 09 20:20:45 embed-certs-565110 crio[651]: time="2025-10-09T20:20:45.672924904Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 09 20:20:45 embed-certs-565110 crio[651]: time="2025-10-09T20:20:45.67295482Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 09 20:20:45 embed-certs-565110 crio[651]: time="2025-10-09T20:20:45.676878104Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 09 20:20:45 embed-certs-565110 crio[651]: time="2025-10-09T20:20:45.676917473Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 09 20:20:52 embed-certs-565110 crio[651]: time="2025-10-09T20:20:52.560850754Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=0d7676a1-e960-4971-ad41-92bd72a13986 name=/runtime.v1.ImageService/ImageStatus
	Oct 09 20:20:52 embed-certs-565110 crio[651]: time="2025-10-09T20:20:52.562747755Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=7a44e91f-86cf-41a5-bec2-b5582d19b765 name=/runtime.v1.ImageService/ImageStatus
	Oct 09 20:20:52 embed-certs-565110 crio[651]: time="2025-10-09T20:20:52.563758657Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-wvnph/dashboard-metrics-scraper" id=332fcc8b-fd5d-406a-887f-bd2bc413cf04 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 20:20:52 embed-certs-565110 crio[651]: time="2025-10-09T20:20:52.563972764Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 09 20:20:52 embed-certs-565110 crio[651]: time="2025-10-09T20:20:52.578401849Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 09 20:20:52 embed-certs-565110 crio[651]: time="2025-10-09T20:20:52.579001718Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 09 20:20:52 embed-certs-565110 crio[651]: time="2025-10-09T20:20:52.604121862Z" level=info msg="Created container 5a38898751c3190370fb093a21e09fb35270396f2335f7cf298f3ffeb676eab4: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-wvnph/dashboard-metrics-scraper" id=332fcc8b-fd5d-406a-887f-bd2bc413cf04 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 20:20:52 embed-certs-565110 crio[651]: time="2025-10-09T20:20:52.605368738Z" level=info msg="Starting container: 5a38898751c3190370fb093a21e09fb35270396f2335f7cf298f3ffeb676eab4" id=39ef6579-22b9-4fde-9b6c-a5822d9dfbbd name=/runtime.v1.RuntimeService/StartContainer
	Oct 09 20:20:52 embed-certs-565110 crio[651]: time="2025-10-09T20:20:52.608477668Z" level=info msg="Started container" PID=1735 containerID=5a38898751c3190370fb093a21e09fb35270396f2335f7cf298f3ffeb676eab4 description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-wvnph/dashboard-metrics-scraper id=39ef6579-22b9-4fde-9b6c-a5822d9dfbbd name=/runtime.v1.RuntimeService/StartContainer sandboxID=88693c847ec07cbc177bebd52670c8cf497a87577e562e09e963df26cb1f2eae
	Oct 09 20:20:52 embed-certs-565110 conmon[1733]: conmon 5a38898751c3190370fb <ninfo>: container 1735 exited with status 1
	Oct 09 20:20:52 embed-certs-565110 crio[651]: time="2025-10-09T20:20:52.856397595Z" level=info msg="Removing container: 02a0a6e50a9df815b7a8e5622e75cf0fd7bb066c9f4c849fe03efa883d3c54e9" id=d9f728bb-fd89-49b9-a1f4-7a1269093203 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 09 20:20:52 embed-certs-565110 crio[651]: time="2025-10-09T20:20:52.866793019Z" level=info msg="Error loading conmon cgroup of container 02a0a6e50a9df815b7a8e5622e75cf0fd7bb066c9f4c849fe03efa883d3c54e9: cgroup deleted" id=d9f728bb-fd89-49b9-a1f4-7a1269093203 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 09 20:20:52 embed-certs-565110 crio[651]: time="2025-10-09T20:20:52.870283362Z" level=info msg="Removed container 02a0a6e50a9df815b7a8e5622e75cf0fd7bb066c9f4c849fe03efa883d3c54e9: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-wvnph/dashboard-metrics-scraper" id=d9f728bb-fd89-49b9-a1f4-7a1269093203 name=/runtime.v1.RuntimeService/RemoveContainer
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	5a38898751c31       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           3 seconds ago       Exited              dashboard-metrics-scraper   3                   88693c847ec07       dashboard-metrics-scraper-6ffb444bf9-wvnph   kubernetes-dashboard
	0f270941e80ca       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           20 seconds ago      Running             storage-provisioner         2                   07ed3ab732764       storage-provisioner                          kube-system
	dcee20808a0ab       docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf   42 seconds ago      Running             kubernetes-dashboard        0                   2f659c12feea9       kubernetes-dashboard-855c9754f9-f7ckg        kubernetes-dashboard
	a5b26ac31259e       1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c                                           50 seconds ago      Running             busybox                     1                   cf6d3d0163c79       busybox                                      default
	6dd3b6f8859b6       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                           50 seconds ago      Running             coredns                     1                   c254ad9d2fd5e       coredns-66bc5c9577-zmqwp                     kube-system
	19c60abb724c1       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                           50 seconds ago      Running             kindnet-cni                 1                   560c70353a458       kindnet-mjfwz                                kube-system
	3717764bae0d2       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           50 seconds ago      Exited              storage-provisioner         1                   07ed3ab732764       storage-provisioner                          kube-system
	5bcf7f81c448e       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                           51 seconds ago      Running             kube-proxy                  1                   63bc433297ab8       kube-proxy-bhwvw                             kube-system
	1de1928d9c10a       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                           56 seconds ago      Running             kube-apiserver              1                   e775b526799a8       kube-apiserver-embed-certs-565110            kube-system
	263af593d9448       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                           56 seconds ago      Running             kube-controller-manager     1                   b3ca0480c2694       kube-controller-manager-embed-certs-565110   kube-system
	e15b99435508a       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                           56 seconds ago      Running             kube-scheduler              1                   6bc2934ce3cd2       kube-scheduler-embed-certs-565110            kube-system
	6d66a1c644fe6       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                           56 seconds ago      Running             etcd                        1                   54d46624eb736       etcd-embed-certs-565110                      kube-system
	
	
	==> coredns [6dd3b6f8859b6b73158b023597467ffd3bfbf74dba8207996ffb59ba32b783e5] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:35024 - 6237 "HINFO IN 4570175342769222036.437017538883807812. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.014932234s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	
	
	==> describe nodes <==
	Name:               embed-certs-565110
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=embed-certs-565110
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=67931d2cc0a2e29153be17ee4f2d502b8a45c9cb
	                    minikube.k8s.io/name=embed-certs-565110
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_09T20_18_36_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 09 Oct 2025 20:18:33 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-565110
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 09 Oct 2025 20:20:44 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 09 Oct 2025 20:20:35 +0000   Thu, 09 Oct 2025 20:18:27 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 09 Oct 2025 20:20:35 +0000   Thu, 09 Oct 2025 20:18:27 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 09 Oct 2025 20:20:35 +0000   Thu, 09 Oct 2025 20:18:27 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 09 Oct 2025 20:20:35 +0000   Thu, 09 Oct 2025 20:19:22 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    embed-certs-565110
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022292Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022292Ki
	  pods:               110
	System Info:
	  Machine ID:                 832389f4a3984c1ba73cd231980de142
	  System UUID:                b35d8597-f430-4f2f-bbdb-0cd122e89c1c
	  Boot ID:                    7eb9c7d9-8be8-415a-b196-052b632584a4
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         91s
	  kube-system                 coredns-66bc5c9577-zmqwp                      100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     2m15s
	  kube-system                 etcd-embed-certs-565110                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         2m22s
	  kube-system                 kindnet-mjfwz                                 100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      2m16s
	  kube-system                 kube-apiserver-embed-certs-565110             250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m21s
	  kube-system                 kube-controller-manager-embed-certs-565110    200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m21s
	  kube-system                 kube-proxy-bhwvw                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m16s
	  kube-system                 kube-scheduler-embed-certs-565110             100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m21s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m14s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-wvnph    0 (0%)        0 (0%)      0 (0%)           0 (0%)         48s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-f7ckg         0 (0%)        0 (0%)      0 (0%)           0 (0%)         48s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 2m14s              kube-proxy       
	  Normal   Starting                 49s                kube-proxy       
	  Normal   NodeHasSufficientPID     2m21s              kubelet          Node embed-certs-565110 status is now: NodeHasSufficientPID
	  Warning  CgroupV1                 2m21s              kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  2m21s              kubelet          Node embed-certs-565110 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m21s              kubelet          Node embed-certs-565110 status is now: NodeHasNoDiskPressure
	  Normal   Starting                 2m21s              kubelet          Starting kubelet.
	  Normal   RegisteredNode           2m17s              node-controller  Node embed-certs-565110 event: Registered Node embed-certs-565110 in Controller
	  Normal   NodeReady                94s                kubelet          Node embed-certs-565110 status is now: NodeReady
	  Normal   Starting                 58s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 58s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  58s (x8 over 58s)  kubelet          Node embed-certs-565110 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    58s (x8 over 58s)  kubelet          Node embed-certs-565110 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     58s (x8 over 58s)  kubelet          Node embed-certs-565110 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           49s                node-controller  Node embed-certs-565110 event: Registered Node embed-certs-565110 in Controller
	
	
	==> dmesg <==
	[Oct 9 19:49] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:50] overlayfs: idmapped layers are currently not supported
	[ +27.967875] overlayfs: idmapped layers are currently not supported
	[  +2.167003] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:51] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:52] overlayfs: idmapped layers are currently not supported
	[ +41.056229] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:53] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:54] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:55] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:57] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:59] overlayfs: idmapped layers are currently not supported
	[ +30.257956] overlayfs: idmapped layers are currently not supported
	[Oct 9 20:02] overlayfs: idmapped layers are currently not supported
	[Oct 9 20:04] overlayfs: idmapped layers are currently not supported
	[Oct 9 20:06] overlayfs: idmapped layers are currently not supported
	[Oct 9 20:12] overlayfs: idmapped layers are currently not supported
	[Oct 9 20:14] overlayfs: idmapped layers are currently not supported
	[Oct 9 20:15] overlayfs: idmapped layers are currently not supported
	[Oct 9 20:16] overlayfs: idmapped layers are currently not supported
	[ +23.810739] overlayfs: idmapped layers are currently not supported
	[Oct 9 20:18] overlayfs: idmapped layers are currently not supported
	[ +26.082927] overlayfs: idmapped layers are currently not supported
	[Oct 9 20:19] overlayfs: idmapped layers are currently not supported
	[ +21.956614] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [6d66a1c644fe699013f3d024b65f4dfa2c5f6bb2e344eef4ab51199503d6bb1f] <==
	{"level":"warn","ts":"2025-10-09T20:20:02.235584Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49416","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T20:20:02.263923Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49432","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T20:20:02.277952Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49450","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T20:20:02.307215Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49466","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T20:20:02.325402Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49484","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T20:20:02.355519Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49504","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T20:20:02.365513Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49520","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T20:20:02.381357Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49526","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T20:20:02.401240Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49538","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T20:20:02.418634Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49570","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T20:20:02.443512Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49582","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T20:20:02.466983Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49602","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T20:20:02.483938Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49626","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T20:20:02.499351Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49636","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T20:20:02.514502Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49656","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T20:20:02.539838Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49670","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T20:20:02.551996Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49690","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T20:20:02.568061Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49696","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T20:20:02.586113Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49714","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T20:20:02.603809Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49738","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T20:20:02.629191Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49760","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T20:20:02.657391Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49784","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T20:20:02.679408Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49806","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T20:20:02.689668Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49830","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T20:20:02.794493Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49846","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 20:20:56 up  3:03,  0 user,  load average: 3.24, 2.93, 2.14
	Linux embed-certs-565110 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [19c60abb724c168299c8076033b87385f420db683dd0f2474250da6b74aaf169] <==
	I1009 20:20:05.429732       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1009 20:20:05.429994       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1009 20:20:05.430121       1 main.go:148] setting mtu 1500 for CNI 
	I1009 20:20:05.430133       1 main.go:178] kindnetd IP family: "ipv4"
	I1009 20:20:05.430145       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-09T20:20:05Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1009 20:20:05.701599       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1009 20:20:05.701694       1 controller.go:381] "Waiting for informer caches to sync"
	I1009 20:20:05.701706       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1009 20:20:05.703147       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1009 20:20:35.701913       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1009 20:20:35.702902       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1009 20:20:35.703009       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1009 20:20:35.702914       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	I1009 20:20:37.302449       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1009 20:20:37.302501       1 metrics.go:72] Registering metrics
	I1009 20:20:37.302592       1 controller.go:711] "Syncing nftables rules"
	I1009 20:20:45.651694       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1009 20:20:45.651842       1 main.go:301] handling current node
	I1009 20:20:55.649286       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1009 20:20:55.649340       1 main.go:301] handling current node
	
	
	==> kube-apiserver [1de1928d9c10a7383f82f9d07f373a124ba301e004ce8acd88dd8a940cd3c874] <==
	I1009 20:20:04.042642       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1009 20:20:04.080429       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1009 20:20:04.091759       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1009 20:20:04.091875       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1009 20:20:04.094468       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1009 20:20:04.101959       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1009 20:20:04.102153       1 aggregator.go:171] initial CRD sync complete...
	I1009 20:20:04.102174       1 autoregister_controller.go:144] Starting autoregister controller
	I1009 20:20:04.102182       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1009 20:20:04.102189       1 cache.go:39] Caches are synced for autoregister controller
	I1009 20:20:04.104284       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1009 20:20:04.104319       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1009 20:20:04.118910       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1009 20:20:04.135816       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1009 20:20:04.530122       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1009 20:20:04.564858       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1009 20:20:05.260733       1 controller.go:667] quota admission added evaluator for: namespaces
	I1009 20:20:05.436599       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1009 20:20:05.563887       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1009 20:20:05.686194       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1009 20:20:05.931405       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.96.85.27"}
	I1009 20:20:05.950344       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.111.155.67"}
	I1009 20:20:07.910966       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1009 20:20:08.361489       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1009 20:20:08.459373       1 controller.go:667] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [263af593d94482c92965e6f0511548fd1ccf9f2292e732c23158498a550ac2a4] <==
	I1009 20:20:07.910155       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1009 20:20:07.916178       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1009 20:20:07.916538       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1009 20:20:07.916603       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1009 20:20:07.916648       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1009 20:20:07.916678       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1009 20:20:07.916720       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1009 20:20:07.917385       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1009 20:20:07.917466       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1009 20:20:07.918814       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1009 20:20:07.921262       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1009 20:20:07.927555       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1009 20:20:07.927616       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1009 20:20:07.927625       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1009 20:20:07.933707       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1009 20:20:07.940324       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1009 20:20:07.941587       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1009 20:20:07.952454       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1009 20:20:07.952554       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1009 20:20:07.953690       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1009 20:20:07.953763       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1009 20:20:07.953729       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1009 20:20:07.953748       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1009 20:20:07.960475       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1009 20:20:07.964763       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	
	
	==> kube-proxy [5bcf7f81c448e41e559806602e3f3a1d94582cbf78df0ab117caa5f14d6ba76a] <==
	I1009 20:20:05.895866       1 server_linux.go:53] "Using iptables proxy"
	I1009 20:20:06.067019       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1009 20:20:06.168141       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1009 20:20:06.168287       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1009 20:20:06.168451       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1009 20:20:06.205472       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1009 20:20:06.205594       1 server_linux.go:132] "Using iptables Proxier"
	I1009 20:20:06.212535       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1009 20:20:06.212977       1 server.go:527] "Version info" version="v1.34.1"
	I1009 20:20:06.213448       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1009 20:20:06.214872       1 config.go:200] "Starting service config controller"
	I1009 20:20:06.214939       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1009 20:20:06.214986       1 config.go:106] "Starting endpoint slice config controller"
	I1009 20:20:06.215012       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1009 20:20:06.215049       1 config.go:403] "Starting serviceCIDR config controller"
	I1009 20:20:06.215076       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1009 20:20:06.215736       1 config.go:309] "Starting node config controller"
	I1009 20:20:06.219098       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1009 20:20:06.219230       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1009 20:20:06.315111       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1009 20:20:06.315116       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1009 20:20:06.315155       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [e15b99435508a3068f9f9d4d692dd1bd7f56391601b5b0179b6642e79aa3078f] <==
	I1009 20:20:02.500099       1 serving.go:386] Generated self-signed cert in-memory
	I1009 20:20:06.249744       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1009 20:20:06.249784       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1009 20:20:06.256123       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1009 20:20:06.256172       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1009 20:20:06.256737       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1009 20:20:06.256759       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1009 20:20:06.256776       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1009 20:20:06.256876       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1009 20:20:06.257486       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1009 20:20:06.259049       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1009 20:20:06.356900       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1009 20:20:06.356981       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1009 20:20:06.357207       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	
	
	==> kubelet <==
	Oct 09 20:20:08 embed-certs-565110 kubelet[777]: I1009 20:20:08.687028     777 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tbm8v\" (UniqueName: \"kubernetes.io/projected/def1cb05-75eb-47cd-8733-e75e6c64ee66-kube-api-access-tbm8v\") pod \"kubernetes-dashboard-855c9754f9-f7ckg\" (UID: \"def1cb05-75eb-47cd-8733-e75e6c64ee66\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-f7ckg"
	Oct 09 20:20:08 embed-certs-565110 kubelet[777]: I1009 20:20:08.687047     777 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/e156a081-45c8-46ba-b291-eb3db7e6a867-tmp-volume\") pod \"dashboard-metrics-scraper-6ffb444bf9-wvnph\" (UID: \"e156a081-45c8-46ba-b291-eb3db7e6a867\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-wvnph"
	Oct 09 20:20:08 embed-certs-565110 kubelet[777]: W1009 20:20:08.875887     777 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/5db0c011c6081f65675c1c7e0e0cead1ee603fc85ef523d794ffef197f368e85/crio-2f659c12feea9f2751421d4bf9f866b8a4d322a26a32a78119f47d7e06ea1eec WatchSource:0}: Error finding container 2f659c12feea9f2751421d4bf9f866b8a4d322a26a32a78119f47d7e06ea1eec: Status 404 returned error can't find the container with id 2f659c12feea9f2751421d4bf9f866b8a4d322a26a32a78119f47d7e06ea1eec
	Oct 09 20:20:08 embed-certs-565110 kubelet[777]: I1009 20:20:08.950928     777 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Oct 09 20:20:16 embed-certs-565110 kubelet[777]: I1009 20:20:16.929571     777 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-f7ckg" podStartSLOduration=4.369183779 podStartE2EDuration="8.929550865s" podCreationTimestamp="2025-10-09 20:20:08 +0000 UTC" firstStartedPulling="2025-10-09 20:20:08.880957918 +0000 UTC m=+10.546764731" lastFinishedPulling="2025-10-09 20:20:13.441324996 +0000 UTC m=+15.107131817" observedRunningTime="2025-10-09 20:20:13.75551031 +0000 UTC m=+15.421317131" watchObservedRunningTime="2025-10-09 20:20:16.929550865 +0000 UTC m=+18.595357678"
	Oct 09 20:20:18 embed-certs-565110 kubelet[777]: I1009 20:20:18.758484     777 scope.go:117] "RemoveContainer" containerID="68fbac12b13e362fa52bbe7a7f1e90975d503cfe1178d1d230a132115e805b5f"
	Oct 09 20:20:19 embed-certs-565110 kubelet[777]: I1009 20:20:19.762345     777 scope.go:117] "RemoveContainer" containerID="68fbac12b13e362fa52bbe7a7f1e90975d503cfe1178d1d230a132115e805b5f"
	Oct 09 20:20:19 embed-certs-565110 kubelet[777]: I1009 20:20:19.762637     777 scope.go:117] "RemoveContainer" containerID="e8aee7e08506cd1a9ede8f256966cdb4f2af7592db4e2681c5a20e429af60fac"
	Oct 09 20:20:19 embed-certs-565110 kubelet[777]: E1009 20:20:19.762785     777 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-wvnph_kubernetes-dashboard(e156a081-45c8-46ba-b291-eb3db7e6a867)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-wvnph" podUID="e156a081-45c8-46ba-b291-eb3db7e6a867"
	Oct 09 20:20:20 embed-certs-565110 kubelet[777]: I1009 20:20:20.766348     777 scope.go:117] "RemoveContainer" containerID="e8aee7e08506cd1a9ede8f256966cdb4f2af7592db4e2681c5a20e429af60fac"
	Oct 09 20:20:20 embed-certs-565110 kubelet[777]: E1009 20:20:20.766979     777 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-wvnph_kubernetes-dashboard(e156a081-45c8-46ba-b291-eb3db7e6a867)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-wvnph" podUID="e156a081-45c8-46ba-b291-eb3db7e6a867"
	Oct 09 20:20:28 embed-certs-565110 kubelet[777]: I1009 20:20:28.895834     777 scope.go:117] "RemoveContainer" containerID="e8aee7e08506cd1a9ede8f256966cdb4f2af7592db4e2681c5a20e429af60fac"
	Oct 09 20:20:29 embed-certs-565110 kubelet[777]: I1009 20:20:29.790330     777 scope.go:117] "RemoveContainer" containerID="e8aee7e08506cd1a9ede8f256966cdb4f2af7592db4e2681c5a20e429af60fac"
	Oct 09 20:20:29 embed-certs-565110 kubelet[777]: I1009 20:20:29.790653     777 scope.go:117] "RemoveContainer" containerID="02a0a6e50a9df815b7a8e5622e75cf0fd7bb066c9f4c849fe03efa883d3c54e9"
	Oct 09 20:20:29 embed-certs-565110 kubelet[777]: E1009 20:20:29.790816     777 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-wvnph_kubernetes-dashboard(e156a081-45c8-46ba-b291-eb3db7e6a867)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-wvnph" podUID="e156a081-45c8-46ba-b291-eb3db7e6a867"
	Oct 09 20:20:35 embed-certs-565110 kubelet[777]: I1009 20:20:35.807384     777 scope.go:117] "RemoveContainer" containerID="3717764bae0d2e9c480c451663d8436220a28e339f7ea5f728f760e6db2361d2"
	Oct 09 20:20:38 embed-certs-565110 kubelet[777]: I1009 20:20:38.895085     777 scope.go:117] "RemoveContainer" containerID="02a0a6e50a9df815b7a8e5622e75cf0fd7bb066c9f4c849fe03efa883d3c54e9"
	Oct 09 20:20:38 embed-certs-565110 kubelet[777]: E1009 20:20:38.895260     777 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-wvnph_kubernetes-dashboard(e156a081-45c8-46ba-b291-eb3db7e6a867)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-wvnph" podUID="e156a081-45c8-46ba-b291-eb3db7e6a867"
	Oct 09 20:20:52 embed-certs-565110 kubelet[777]: I1009 20:20:52.560135     777 scope.go:117] "RemoveContainer" containerID="02a0a6e50a9df815b7a8e5622e75cf0fd7bb066c9f4c849fe03efa883d3c54e9"
	Oct 09 20:20:52 embed-certs-565110 kubelet[777]: I1009 20:20:52.854258     777 scope.go:117] "RemoveContainer" containerID="02a0a6e50a9df815b7a8e5622e75cf0fd7bb066c9f4c849fe03efa883d3c54e9"
	Oct 09 20:20:52 embed-certs-565110 kubelet[777]: I1009 20:20:52.854925     777 scope.go:117] "RemoveContainer" containerID="5a38898751c3190370fb093a21e09fb35270396f2335f7cf298f3ffeb676eab4"
	Oct 09 20:20:52 embed-certs-565110 kubelet[777]: E1009 20:20:52.855472     777 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-wvnph_kubernetes-dashboard(e156a081-45c8-46ba-b291-eb3db7e6a867)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-wvnph" podUID="e156a081-45c8-46ba-b291-eb3db7e6a867"
	Oct 09 20:20:53 embed-certs-565110 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 09 20:20:53 embed-certs-565110 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 09 20:20:53 embed-certs-565110 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	
	
	==> kubernetes-dashboard [dcee20808a0ab7b88a286f7a9fa5402833491c3468c0d98c6e6e41a4d387aeca] <==
	2025/10/09 20:20:13 Using namespace: kubernetes-dashboard
	2025/10/09 20:20:13 Using in-cluster config to connect to apiserver
	2025/10/09 20:20:13 Using secret token for csrf signing
	2025/10/09 20:20:13 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/10/09 20:20:13 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/10/09 20:20:13 Successful initial request to the apiserver, version: v1.34.1
	2025/10/09 20:20:13 Generating JWE encryption key
	2025/10/09 20:20:13 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/10/09 20:20:13 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/10/09 20:20:14 Initializing JWE encryption key from synchronized object
	2025/10/09 20:20:14 Creating in-cluster Sidecar client
	2025/10/09 20:20:14 Serving insecurely on HTTP port: 9090
	2025/10/09 20:20:14 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/09 20:20:44 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/09 20:20:13 Starting overwatch
	
	
	==> storage-provisioner [0f270941e80ca04066ea9be417daf5c6ce5c2ec0888d5bfff2efb8528aeb3c92] <==
	I1009 20:20:35.871509       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1009 20:20:35.883797       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1009 20:20:35.884015       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1009 20:20:35.886498       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1009 20:20:39.341394       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1009 20:20:43.601678       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1009 20:20:47.200520       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1009 20:20:50.254789       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1009 20:20:53.277382       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1009 20:20:53.283661       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1009 20:20:53.283805       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1009 20:20:53.283969       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-565110_e43d944c-3ef4-4e52-90da-927b254e84de!
	I1009 20:20:53.284891       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"363137e6-edc1-40e3-81f2-14e316bf471f", APIVersion:"v1", ResourceVersion:"656", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-565110_e43d944c-3ef4-4e52-90da-927b254e84de became leader
	W1009 20:20:53.298056       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1009 20:20:53.301962       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1009 20:20:53.384144       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-565110_e43d944c-3ef4-4e52-90da-927b254e84de!
	W1009 20:20:55.304434       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1009 20:20:55.308876       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [3717764bae0d2e9c480c451663d8436220a28e339f7ea5f728f760e6db2361d2] <==
	I1009 20:20:05.525842       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1009 20:20:35.669788       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-565110 -n embed-certs-565110
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-565110 -n embed-certs-565110: exit status 2 (370.176441ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context embed-certs-565110 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect embed-certs-565110
helpers_test.go:243: (dbg) docker inspect embed-certs-565110:

-- stdout --
	[
	    {
	        "Id": "5db0c011c6081f65675c1c7e0e0cead1ee603fc85ef523d794ffef197f368e85",
	        "Created": "2025-10-09T20:18:08.202138688Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 496051,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-09T20:19:51.517235018Z",
	            "FinishedAt": "2025-10-09T20:19:50.210409037Z"
	        },
	        "Image": "sha256:0e619a02aa1f399c748a05785ca381533d7a01df724b62f3a9ddd9e288db8f6f",
	        "ResolvConfPath": "/var/lib/docker/containers/5db0c011c6081f65675c1c7e0e0cead1ee603fc85ef523d794ffef197f368e85/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/5db0c011c6081f65675c1c7e0e0cead1ee603fc85ef523d794ffef197f368e85/hostname",
	        "HostsPath": "/var/lib/docker/containers/5db0c011c6081f65675c1c7e0e0cead1ee603fc85ef523d794ffef197f368e85/hosts",
	        "LogPath": "/var/lib/docker/containers/5db0c011c6081f65675c1c7e0e0cead1ee603fc85ef523d794ffef197f368e85/5db0c011c6081f65675c1c7e0e0cead1ee603fc85ef523d794ffef197f368e85-json.log",
	        "Name": "/embed-certs-565110",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "embed-certs-565110:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "embed-certs-565110",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "5db0c011c6081f65675c1c7e0e0cead1ee603fc85ef523d794ffef197f368e85",
	                "LowerDir": "/var/lib/docker/overlay2/1f20732bafca7c4ec6bbe75518ab73ef01fcee46e54c892cfb75e2f68114dce6-init/diff:/var/lib/docker/overlay2/810a91395ed9b7ed2c0bbbdee8600efcf64f88722cbabc47d471235a9f901ed9/diff",
	                "MergedDir": "/var/lib/docker/overlay2/1f20732bafca7c4ec6bbe75518ab73ef01fcee46e54c892cfb75e2f68114dce6/merged",
	                "UpperDir": "/var/lib/docker/overlay2/1f20732bafca7c4ec6bbe75518ab73ef01fcee46e54c892cfb75e2f68114dce6/diff",
	                "WorkDir": "/var/lib/docker/overlay2/1f20732bafca7c4ec6bbe75518ab73ef01fcee46e54c892cfb75e2f68114dce6/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "embed-certs-565110",
	                "Source": "/var/lib/docker/volumes/embed-certs-565110/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-565110",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-565110",
	                "name.minikube.sigs.k8s.io": "embed-certs-565110",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "058cbe69479446d85a6f1bc48737b773fb287e5104f6524338b6de4704a814e8",
	            "SandboxKey": "/var/run/docker/netns/058cbe694794",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33446"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33447"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33450"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33448"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33449"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "embed-certs-565110": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "9a:2a:e8:98:bc:da",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "c39245925c93cf03ed8abe3702c98fe11aa5fe2a748150abd863ee2a4578bafb",
	                    "EndpointID": "ca2e0a22e65243efa4cba5a9e2cf1e701a9f79cd2b5e6bf91b735f6228dcb83f",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "embed-certs-565110",
	                        "5db0c011c608"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
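Note on the inspect output above: HostConfig.PortBindings lists empty HostPort values, while NetworkSettings.Ports carries the actual host-side ports (33446-33450). The test harness resolves the container's SSH port with the same Go template that shows up in the cli_runner lines later in this log; the following is only a minimal stand-alone sketch of that lookup (a hypothetical helper, assuming the embed-certs-565110 container still exists and the docker CLI is on PATH), not part of the harness itself:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Same template string used by the cli_runner invocations below:
	// it indexes NetworkSettings.Ports["22/tcp"][0].HostPort.
	out, err := exec.Command("docker", "container", "inspect",
		"-f", `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`,
		"embed-certs-565110").Output()
	if err != nil {
		panic(err)
	}
	// For the inspect output above this would print 33446, matching the
	// port the SSH client uses in the provisioning log further down.
	fmt.Println("ssh host port:", strings.TrimSpace(string(out)))
}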
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-565110 -n embed-certs-565110
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-565110 -n embed-certs-565110: exit status 2 (345.251458ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
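The --format value passed to minikube status above is a Go text/template rendered against the status data, which is why the stdout block contains only "Running" (the Host field on its own). A small illustrative sketch of that rendering, with the struct and field names assumed for the example rather than taken from minikube's source:

package main

import (
	"os"
	"text/template"
)

func main() {
	// Stand-in for the status data; field names are illustrative.
	status := struct{ Host, Kubelet, APIServer string }{
		Host: "Running", Kubelet: "Stopped", APIServer: "Paused",
	}
	// {{.Host}} selects just the Host field, as in the command above.
	tmpl := template.Must(template.New("status").Parse("{{.Host}}\n"))
	_ = tmpl.Execute(os.Stdout, status) // prints: Running
}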
helpers_test.go:252: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p embed-certs-565110 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p embed-certs-565110 logs -n 25: (1.25209262s)
helpers_test.go:260: TestStartStop/group/embed-certs/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬──────────────
───────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼──────────────
───────┤
	│ addons  │ enable dashboard -p old-k8s-version-670649 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-670649       │ jenkins │ v1.37.0 │ 09 Oct 25 20:16 UTC │ 09 Oct 25 20:16 UTC │
	│ start   │ -p old-k8s-version-670649 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-670649       │ jenkins │ v1.37.0 │ 09 Oct 25 20:16 UTC │ 09 Oct 25 20:17 UTC │
	│ addons  │ enable metrics-server -p no-preload-020313 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-020313            │ jenkins │ v1.37.0 │ 09 Oct 25 20:17 UTC │                     │
	│ stop    │ -p no-preload-020313 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-020313            │ jenkins │ v1.37.0 │ 09 Oct 25 20:17 UTC │ 09 Oct 25 20:17 UTC │
	│ image   │ old-k8s-version-670649 image list --format=json                                                                                                                                                                                               │ old-k8s-version-670649       │ jenkins │ v1.37.0 │ 09 Oct 25 20:17 UTC │ 09 Oct 25 20:17 UTC │
	│ pause   │ -p old-k8s-version-670649 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-670649       │ jenkins │ v1.37.0 │ 09 Oct 25 20:17 UTC │                     │
	│ addons  │ enable dashboard -p no-preload-020313 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-020313            │ jenkins │ v1.37.0 │ 09 Oct 25 20:17 UTC │ 09 Oct 25 20:17 UTC │
	│ start   │ -p no-preload-020313 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-020313            │ jenkins │ v1.37.0 │ 09 Oct 25 20:17 UTC │ 09 Oct 25 20:18 UTC │
	│ delete  │ -p old-k8s-version-670649                                                                                                                                                                                                                     │ old-k8s-version-670649       │ jenkins │ v1.37.0 │ 09 Oct 25 20:17 UTC │ 09 Oct 25 20:17 UTC │
	│ delete  │ -p old-k8s-version-670649                                                                                                                                                                                                                     │ old-k8s-version-670649       │ jenkins │ v1.37.0 │ 09 Oct 25 20:18 UTC │ 09 Oct 25 20:18 UTC │
	│ start   │ -p embed-certs-565110 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-565110           │ jenkins │ v1.37.0 │ 09 Oct 25 20:18 UTC │ 09 Oct 25 20:19 UTC │
	│ image   │ no-preload-020313 image list --format=json                                                                                                                                                                                                    │ no-preload-020313            │ jenkins │ v1.37.0 │ 09 Oct 25 20:18 UTC │ 09 Oct 25 20:18 UTC │
	│ pause   │ -p no-preload-020313 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-020313            │ jenkins │ v1.37.0 │ 09 Oct 25 20:18 UTC │                     │
	│ delete  │ -p no-preload-020313                                                                                                                                                                                                                          │ no-preload-020313            │ jenkins │ v1.37.0 │ 09 Oct 25 20:19 UTC │ 09 Oct 25 20:19 UTC │
	│ delete  │ -p no-preload-020313                                                                                                                                                                                                                          │ no-preload-020313            │ jenkins │ v1.37.0 │ 09 Oct 25 20:19 UTC │ 09 Oct 25 20:19 UTC │
	│ delete  │ -p disable-driver-mounts-613966                                                                                                                                                                                                               │ disable-driver-mounts-613966 │ jenkins │ v1.37.0 │ 09 Oct 25 20:19 UTC │ 09 Oct 25 20:19 UTC │
	│ start   │ -p default-k8s-diff-port-417984 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-417984 │ jenkins │ v1.37.0 │ 09 Oct 25 20:19 UTC │ 09 Oct 25 20:20 UTC │
	│ addons  │ enable metrics-server -p embed-certs-565110 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-565110           │ jenkins │ v1.37.0 │ 09 Oct 25 20:19 UTC │                     │
	│ stop    │ -p embed-certs-565110 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-565110           │ jenkins │ v1.37.0 │ 09 Oct 25 20:19 UTC │ 09 Oct 25 20:19 UTC │
	│ addons  │ enable dashboard -p embed-certs-565110 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-565110           │ jenkins │ v1.37.0 │ 09 Oct 25 20:19 UTC │ 09 Oct 25 20:19 UTC │
	│ start   │ -p embed-certs-565110 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-565110           │ jenkins │ v1.37.0 │ 09 Oct 25 20:19 UTC │ 09 Oct 25 20:20 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-417984 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-417984 │ jenkins │ v1.37.0 │ 09 Oct 25 20:20 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-417984 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-417984 │ jenkins │ v1.37.0 │ 09 Oct 25 20:20 UTC │                     │
	│ image   │ embed-certs-565110 image list --format=json                                                                                                                                                                                                   │ embed-certs-565110           │ jenkins │ v1.37.0 │ 09 Oct 25 20:20 UTC │ 09 Oct 25 20:20 UTC │
	│ pause   │ -p embed-certs-565110 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-565110           │ jenkins │ v1.37.0 │ 09 Oct 25 20:20 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴──────────────
───────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/09 20:19:51
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1009 20:19:51.102892  495888 out.go:360] Setting OutFile to fd 1 ...
	I1009 20:19:51.103497  495888 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1009 20:19:51.103530  495888 out.go:374] Setting ErrFile to fd 2...
	I1009 20:19:51.103552  495888 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1009 20:19:51.103850  495888 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21683-294150/.minikube/bin
	I1009 20:19:51.104352  495888 out.go:368] Setting JSON to false
	I1009 20:19:51.105472  495888 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":10931,"bootTime":1760030261,"procs":190,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1009 20:19:51.105578  495888 start.go:143] virtualization:  
	I1009 20:19:51.109357  495888 out.go:179] * [embed-certs-565110] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1009 20:19:51.112633  495888 out.go:179]   - MINIKUBE_LOCATION=21683
	I1009 20:19:51.112721  495888 notify.go:221] Checking for updates...
	I1009 20:19:51.116857  495888 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1009 20:19:51.119909  495888 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21683-294150/kubeconfig
	I1009 20:19:51.122999  495888 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21683-294150/.minikube
	I1009 20:19:51.126053  495888 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1009 20:19:51.131523  495888 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1009 20:19:51.136756  495888 config.go:182] Loaded profile config "embed-certs-565110": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1009 20:19:51.137593  495888 driver.go:422] Setting default libvirt URI to qemu:///system
	I1009 20:19:51.187119  495888 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1009 20:19:51.187242  495888 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1009 20:19:51.297737  495888 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-09 20:19:51.282094864 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214827008 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1009 20:19:51.297886  495888 docker.go:319] overlay module found
	I1009 20:19:51.303198  495888 out.go:179] * Using the docker driver based on existing profile
	I1009 20:19:51.306311  495888 start.go:309] selected driver: docker
	I1009 20:19:51.306342  495888 start.go:930] validating driver "docker" against &{Name:embed-certs-565110 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-565110 Namespace:default APIServerHAVIP: APIServerN
ame:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:
9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1009 20:19:51.306461  495888 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1009 20:19:51.307345  495888 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1009 20:19:51.421645  495888 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-09 20:19:51.408696868 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214827008 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1009 20:19:51.422009  495888 start_flags.go:993] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1009 20:19:51.422036  495888 cni.go:84] Creating CNI manager for ""
	I1009 20:19:51.422095  495888 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1009 20:19:51.422133  495888 start.go:353] cluster config:
	{Name:embed-certs-565110 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-565110 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Contain
erRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false
DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1009 20:19:51.425473  495888 out.go:179] * Starting "embed-certs-565110" primary control-plane node in "embed-certs-565110" cluster
	I1009 20:19:51.428909  495888 cache.go:123] Beginning downloading kic base image for docker with crio
	I1009 20:19:51.431951  495888 out.go:179] * Pulling base image v0.0.48-1759745255-21703 ...
	I1009 20:19:49.888113  492745 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1009 20:19:49.888143  492745 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1009 20:19:49.888230  492745 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-417984
	I1009 20:19:49.908801  492745 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1009 20:19:49.908825  492745 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1009 20:19:49.908912  492745 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-417984
	I1009 20:19:49.938475  492745 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33441 SSHKeyPath:/home/jenkins/minikube-integration/21683-294150/.minikube/machines/default-k8s-diff-port-417984/id_rsa Username:docker}
	I1009 20:19:49.953320  492745 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33441 SSHKeyPath:/home/jenkins/minikube-integration/21683-294150/.minikube/machines/default-k8s-diff-port-417984/id_rsa Username:docker}
	I1009 20:19:50.098321  492745 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1009 20:19:50.189335  492745 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1009 20:19:50.195949  492745 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1009 20:19:50.304820  492745 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1009 20:19:50.669429  492745 start.go:977] {"host.minikube.internal": 192.168.85.1} host record injected into CoreDNS's ConfigMap
	I1009 20:19:50.671170  492745 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-417984" to be "Ready" ...
	I1009 20:19:51.187848  492745 kapi.go:214] "coredns" deployment in "kube-system" namespace and "default-k8s-diff-port-417984" context rescaled to 1 replicas
	I1009 20:19:51.426572  492745 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.230578231s)
	I1009 20:19:51.426624  492745 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.121763432s)
	I1009 20:19:51.451362  492745 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1009 20:19:51.434864  495888 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1009 20:19:51.434951  495888 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21683-294150/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1009 20:19:51.434963  495888 cache.go:58] Caching tarball of preloaded images
	I1009 20:19:51.435068  495888 preload.go:233] Found /home/jenkins/minikube-integration/21683-294150/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1009 20:19:51.435079  495888 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1009 20:19:51.435197  495888 profile.go:143] Saving config to /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/embed-certs-565110/config.json ...
	I1009 20:19:51.435452  495888 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon
	I1009 20:19:51.457590  495888 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon, skipping pull
	I1009 20:19:51.457609  495888 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 exists in daemon, skipping load
	I1009 20:19:51.457621  495888 cache.go:232] Successfully downloaded all kic artifacts
	I1009 20:19:51.457644  495888 start.go:361] acquireMachinesLock for embed-certs-565110: {Name:mk32ec325145c7dbf708685a0b7d3c4450230c14 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1009 20:19:51.457699  495888 start.go:365] duration metric: took 38.269µs to acquireMachinesLock for "embed-certs-565110"
	I1009 20:19:51.457718  495888 start.go:97] Skipping create...Using existing machine configuration
	I1009 20:19:51.457724  495888 fix.go:55] fixHost starting: 
	I1009 20:19:51.457987  495888 cli_runner.go:164] Run: docker container inspect embed-certs-565110 --format={{.State.Status}}
	I1009 20:19:51.478706  495888 fix.go:113] recreateIfNeeded on embed-certs-565110: state=Stopped err=<nil>
	W1009 20:19:51.478734  495888 fix.go:139] unexpected machine state, will restart: <nil>
	I1009 20:19:51.454825  492745 addons.go:514] duration metric: took 1.631795731s for enable addons: enabled=[storage-provisioner default-storageclass]
	I1009 20:19:51.482974  495888 out.go:252] * Restarting existing docker container for "embed-certs-565110" ...
	I1009 20:19:51.483091  495888 cli_runner.go:164] Run: docker start embed-certs-565110
	I1009 20:19:51.807227  495888 cli_runner.go:164] Run: docker container inspect embed-certs-565110 --format={{.State.Status}}
	I1009 20:19:51.842702  495888 kic.go:430] container "embed-certs-565110" state is running.
	I1009 20:19:51.843128  495888 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-565110
	I1009 20:19:51.877524  495888 profile.go:143] Saving config to /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/embed-certs-565110/config.json ...
	I1009 20:19:51.877765  495888 machine.go:93] provisionDockerMachine start ...
	I1009 20:19:51.877836  495888 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-565110
	I1009 20:19:51.907208  495888 main.go:141] libmachine: Using SSH client type: native
	I1009 20:19:51.907536  495888 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33446 <nil> <nil>}
	I1009 20:19:51.907553  495888 main.go:141] libmachine: About to run SSH command:
	hostname
	I1009 20:19:51.908974  495888 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:33816->127.0.0.1:33446: read: connection reset by peer
	I1009 20:19:55.081220  495888 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-565110
	
	I1009 20:19:55.081310  495888 ubuntu.go:182] provisioning hostname "embed-certs-565110"
	I1009 20:19:55.081383  495888 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-565110
	I1009 20:19:55.100169  495888 main.go:141] libmachine: Using SSH client type: native
	I1009 20:19:55.100476  495888 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33446 <nil> <nil>}
	I1009 20:19:55.100493  495888 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-565110 && echo "embed-certs-565110" | sudo tee /etc/hostname
	I1009 20:19:55.264075  495888 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-565110
	
	I1009 20:19:55.264186  495888 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-565110
	I1009 20:19:55.282454  495888 main.go:141] libmachine: Using SSH client type: native
	I1009 20:19:55.282834  495888 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33446 <nil> <nil>}
	I1009 20:19:55.282859  495888 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-565110' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-565110/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-565110' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1009 20:19:55.433702  495888 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1009 20:19:55.433729  495888 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21683-294150/.minikube CaCertPath:/home/jenkins/minikube-integration/21683-294150/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21683-294150/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21683-294150/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21683-294150/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21683-294150/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21683-294150/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21683-294150/.minikube}
	I1009 20:19:55.433752  495888 ubuntu.go:190] setting up certificates
	I1009 20:19:55.433762  495888 provision.go:84] configureAuth start
	I1009 20:19:55.433835  495888 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-565110
	I1009 20:19:55.451034  495888 provision.go:143] copyHostCerts
	I1009 20:19:55.451107  495888 exec_runner.go:144] found /home/jenkins/minikube-integration/21683-294150/.minikube/ca.pem, removing ...
	I1009 20:19:55.451131  495888 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21683-294150/.minikube/ca.pem
	I1009 20:19:55.451208  495888 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-294150/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21683-294150/.minikube/ca.pem (1078 bytes)
	I1009 20:19:55.451360  495888 exec_runner.go:144] found /home/jenkins/minikube-integration/21683-294150/.minikube/cert.pem, removing ...
	I1009 20:19:55.451370  495888 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21683-294150/.minikube/cert.pem
	I1009 20:19:55.451400  495888 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-294150/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21683-294150/.minikube/cert.pem (1123 bytes)
	I1009 20:19:55.451482  495888 exec_runner.go:144] found /home/jenkins/minikube-integration/21683-294150/.minikube/key.pem, removing ...
	I1009 20:19:55.451493  495888 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21683-294150/.minikube/key.pem
	I1009 20:19:55.451520  495888 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-294150/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21683-294150/.minikube/key.pem (1679 bytes)
	I1009 20:19:55.451581  495888 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21683-294150/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21683-294150/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21683-294150/.minikube/certs/ca-key.pem org=jenkins.embed-certs-565110 san=[127.0.0.1 192.168.76.2 embed-certs-565110 localhost minikube]
	I1009 20:19:55.723228  495888 provision.go:177] copyRemoteCerts
	I1009 20:19:55.723701  495888 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1009 20:19:55.723756  495888 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-565110
	I1009 20:19:55.745356  495888 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33446 SSHKeyPath:/home/jenkins/minikube-integration/21683-294150/.minikube/machines/embed-certs-565110/id_rsa Username:docker}
	I1009 20:19:55.853673  495888 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-294150/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1009 20:19:55.872520  495888 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-294150/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1009 20:19:55.891414  495888 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-294150/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1009 20:19:55.911282  495888 provision.go:87] duration metric: took 477.503506ms to configureAuth
	I1009 20:19:55.911322  495888 ubuntu.go:206] setting minikube options for container-runtime
	I1009 20:19:55.911556  495888 config.go:182] Loaded profile config "embed-certs-565110": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1009 20:19:55.911693  495888 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-565110
	I1009 20:19:55.935681  495888 main.go:141] libmachine: Using SSH client type: native
	I1009 20:19:55.935991  495888 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33446 <nil> <nil>}
	I1009 20:19:55.936007  495888 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	W1009 20:19:52.674126  492745 node_ready.go:57] node "default-k8s-diff-port-417984" has "Ready":"False" status (will retry)
	W1009 20:19:54.674242  492745 node_ready.go:57] node "default-k8s-diff-port-417984" has "Ready":"False" status (will retry)
	W1009 20:19:56.675208  492745 node_ready.go:57] node "default-k8s-diff-port-417984" has "Ready":"False" status (will retry)
	I1009 20:19:56.260763  495888 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1009 20:19:56.260789  495888 machine.go:96] duration metric: took 4.383005849s to provisionDockerMachine
	I1009 20:19:56.260800  495888 start.go:294] postStartSetup for "embed-certs-565110" (driver="docker")
	I1009 20:19:56.260819  495888 start.go:323] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1009 20:19:56.260900  495888 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1009 20:19:56.260943  495888 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-565110
	I1009 20:19:56.286630  495888 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33446 SSHKeyPath:/home/jenkins/minikube-integration/21683-294150/.minikube/machines/embed-certs-565110/id_rsa Username:docker}
	I1009 20:19:56.390555  495888 ssh_runner.go:195] Run: cat /etc/os-release
	I1009 20:19:56.395007  495888 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1009 20:19:56.395034  495888 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1009 20:19:56.395044  495888 filesync.go:126] Scanning /home/jenkins/minikube-integration/21683-294150/.minikube/addons for local assets ...
	I1009 20:19:56.395097  495888 filesync.go:126] Scanning /home/jenkins/minikube-integration/21683-294150/.minikube/files for local assets ...
	I1009 20:19:56.395176  495888 filesync.go:149] local asset: /home/jenkins/minikube-integration/21683-294150/.minikube/files/etc/ssl/certs/2960022.pem -> 2960022.pem in /etc/ssl/certs
	I1009 20:19:56.395272  495888 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1009 20:19:56.402958  495888 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-294150/.minikube/files/etc/ssl/certs/2960022.pem --> /etc/ssl/certs/2960022.pem (1708 bytes)
	I1009 20:19:56.424396  495888 start.go:297] duration metric: took 163.580707ms for postStartSetup
	I1009 20:19:56.424478  495888 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1009 20:19:56.424533  495888 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-565110
	I1009 20:19:56.447227  495888 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33446 SSHKeyPath:/home/jenkins/minikube-integration/21683-294150/.minikube/machines/embed-certs-565110/id_rsa Username:docker}
	I1009 20:19:56.550726  495888 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1009 20:19:56.556179  495888 fix.go:57] duration metric: took 5.098447768s for fixHost
	I1009 20:19:56.556209  495888 start.go:84] releasing machines lock for "embed-certs-565110", held for 5.098501504s
	I1009 20:19:56.556286  495888 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-565110
	I1009 20:19:56.573374  495888 ssh_runner.go:195] Run: cat /version.json
	I1009 20:19:56.573416  495888 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1009 20:19:56.573438  495888 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-565110
	I1009 20:19:56.573478  495888 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-565110
	I1009 20:19:56.593761  495888 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33446 SSHKeyPath:/home/jenkins/minikube-integration/21683-294150/.minikube/machines/embed-certs-565110/id_rsa Username:docker}
	I1009 20:19:56.624539  495888 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33446 SSHKeyPath:/home/jenkins/minikube-integration/21683-294150/.minikube/machines/embed-certs-565110/id_rsa Username:docker}
	I1009 20:19:56.701349  495888 ssh_runner.go:195] Run: systemctl --version
	I1009 20:19:56.800207  495888 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1009 20:19:56.837954  495888 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1009 20:19:56.842936  495888 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1009 20:19:56.843020  495888 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1009 20:19:56.851187  495888 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1009 20:19:56.851220  495888 start.go:496] detecting cgroup driver to use...
	I1009 20:19:56.851267  495888 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1009 20:19:56.851338  495888 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1009 20:19:56.868899  495888 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1009 20:19:56.882641  495888 docker.go:218] disabling cri-docker service (if available) ...
	I1009 20:19:56.882748  495888 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1009 20:19:56.901981  495888 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1009 20:19:56.922675  495888 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1009 20:19:57.045263  495888 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1009 20:19:57.164062  495888 docker.go:234] disabling docker service ...
	I1009 20:19:57.164140  495888 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1009 20:19:57.182535  495888 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1009 20:19:57.196529  495888 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1009 20:19:57.316352  495888 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1009 20:19:57.436860  495888 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1009 20:19:57.451031  495888 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1009 20:19:57.466163  495888 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1009 20:19:57.466305  495888 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 20:19:57.475527  495888 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1009 20:19:57.475677  495888 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 20:19:57.485065  495888 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 20:19:57.494276  495888 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 20:19:57.503522  495888 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1009 20:19:57.512068  495888 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 20:19:57.527270  495888 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 20:19:57.536150  495888 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 20:19:57.547538  495888 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1009 20:19:57.555776  495888 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1009 20:19:57.563474  495888 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 20:19:57.687781  495888 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1009 20:19:57.832964  495888 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1009 20:19:57.833043  495888 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1009 20:19:57.837082  495888 start.go:564] Will wait 60s for crictl version
	I1009 20:19:57.837268  495888 ssh_runner.go:195] Run: which crictl
	I1009 20:19:57.841002  495888 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1009 20:19:57.884119  495888 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1009 20:19:57.884206  495888 ssh_runner.go:195] Run: crio --version
	I1009 20:19:57.920601  495888 ssh_runner.go:195] Run: crio --version
	I1009 20:19:57.953231  495888 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1009 20:19:57.956094  495888 cli_runner.go:164] Run: docker network inspect embed-certs-565110 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1009 20:19:57.973183  495888 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1009 20:19:57.977379  495888 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1009 20:19:57.987566  495888 kubeadm.go:883] updating cluster {Name:embed-certs-565110 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-565110 Namespace:default APIServerHAVIP: APIServerName:minikubeCA AP
IServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docke
r BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1009 20:19:57.987690  495888 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1009 20:19:57.987753  495888 ssh_runner.go:195] Run: sudo crictl images --output json
	I1009 20:19:58.034743  495888 crio.go:514] all images are preloaded for cri-o runtime.
	I1009 20:19:58.034768  495888 crio.go:433] Images already preloaded, skipping extraction
	I1009 20:19:58.034837  495888 ssh_runner.go:195] Run: sudo crictl images --output json
	I1009 20:19:58.063612  495888 crio.go:514] all images are preloaded for cri-o runtime.
	I1009 20:19:58.063641  495888 cache_images.go:85] Images are preloaded, skipping loading
	I1009 20:19:58.063649  495888 kubeadm.go:934] updating node { 192.168.76.2 8443 v1.34.1 crio true true} ...
	I1009 20:19:58.063757  495888 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=embed-certs-565110 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:embed-certs-565110 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
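The kubelet ExecStart override shown above is delivered as a systemd drop-in; a few lines further down the log copies it (368 bytes) to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf and then runs daemon-reload and start. As an illustration only (the minikube binary and the profile name are taken from this run, not part of the test), the merged unit can be inspected on the node like this:

    # show the full kubelet unit with the 10-kubeadm.conf drop-in merged in
    minikube -p embed-certs-565110 ssh -- systemctl cat kubelet
    # print just the ExecStart line kubelet was actually started with
    minikube -p embed-certs-565110 ssh -- systemctl show kubelet -p ExecStart --no-pager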
	I1009 20:19:58.063850  495888 ssh_runner.go:195] Run: crio config
	I1009 20:19:58.119226  495888 cni.go:84] Creating CNI manager for ""
	I1009 20:19:58.119250  495888 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1009 20:19:58.119270  495888 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1009 20:19:58.119317  495888 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-565110 NodeName:embed-certs-565110 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/e
tc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1009 20:19:58.119477  495888 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-565110"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1009 20:19:58.119554  495888 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1009 20:19:58.127994  495888 binaries.go:44] Found k8s binaries, skipping transfer
	I1009 20:19:58.128078  495888 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1009 20:19:58.136084  495888 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (368 bytes)
	I1009 20:19:58.150168  495888 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1009 20:19:58.164940  495888 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2215 bytes)
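The kubeadm.yaml.new just written (2215 bytes) is the three-document config rendered above: InitConfiguration/ClusterConfiguration for kubeadm plus KubeletConfiguration and KubeProxyConfiguration. A quick manual sanity check, sketched here under the assumption that the standard crictl flags are available on the node, is to read the file back and confirm that the CRI socket it names is actually serving:

    # print the file exactly as kubeadm will consume it
    minikube -p embed-certs-565110 ssh -- sudo cat /var/tmp/minikube/kubeadm.yaml.new
    # nodeRegistration.criSocket points at CRI-O; verify the endpoint responds
    minikube -p embed-certs-565110 ssh -- sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock version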
	I1009 20:19:58.181309  495888 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1009 20:19:58.185366  495888 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1009 20:19:58.195602  495888 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 20:19:58.316882  495888 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1009 20:19:58.332912  495888 certs.go:69] Setting up /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/embed-certs-565110 for IP: 192.168.76.2
	I1009 20:19:58.332938  495888 certs.go:195] generating shared ca certs ...
	I1009 20:19:58.332955  495888 certs.go:227] acquiring lock for ca certs: {Name:mk21fccf7de12baa51e226eec44546b42b579a70 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 20:19:58.333097  495888 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21683-294150/.minikube/ca.key
	I1009 20:19:58.333194  495888 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21683-294150/.minikube/proxy-client-ca.key
	I1009 20:19:58.333206  495888 certs.go:257] generating profile certs ...
	I1009 20:19:58.333308  495888 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/embed-certs-565110/client.key
	I1009 20:19:58.333377  495888 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/embed-certs-565110/apiserver.key.e7b9ab9d
	I1009 20:19:58.333427  495888 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/embed-certs-565110/proxy-client.key
	I1009 20:19:58.333542  495888 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-294150/.minikube/certs/296002.pem (1338 bytes)
	W1009 20:19:58.333574  495888 certs.go:480] ignoring /home/jenkins/minikube-integration/21683-294150/.minikube/certs/296002_empty.pem, impossibly tiny 0 bytes
	I1009 20:19:58.333587  495888 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-294150/.minikube/certs/ca-key.pem (1679 bytes)
	I1009 20:19:58.333618  495888 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-294150/.minikube/certs/ca.pem (1078 bytes)
	I1009 20:19:58.333645  495888 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-294150/.minikube/certs/cert.pem (1123 bytes)
	I1009 20:19:58.333674  495888 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-294150/.minikube/certs/key.pem (1679 bytes)
	I1009 20:19:58.333723  495888 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-294150/.minikube/files/etc/ssl/certs/2960022.pem (1708 bytes)
	I1009 20:19:58.334393  495888 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-294150/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1009 20:19:58.356891  495888 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-294150/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1009 20:19:58.378429  495888 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-294150/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1009 20:19:58.402388  495888 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-294150/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1009 20:19:58.429843  495888 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/embed-certs-565110/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1009 20:19:58.457145  495888 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/embed-certs-565110/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1009 20:19:58.482912  495888 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/embed-certs-565110/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1009 20:19:58.511578  495888 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/embed-certs-565110/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1009 20:19:58.532342  495888 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-294150/.minikube/certs/296002.pem --> /usr/share/ca-certificates/296002.pem (1338 bytes)
	I1009 20:19:58.560879  495888 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-294150/.minikube/files/etc/ssl/certs/2960022.pem --> /usr/share/ca-certificates/2960022.pem (1708 bytes)
	I1009 20:19:58.585808  495888 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-294150/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1009 20:19:58.606843  495888 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1009 20:19:58.621525  495888 ssh_runner.go:195] Run: openssl version
	I1009 20:19:58.628148  495888 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/296002.pem && ln -fs /usr/share/ca-certificates/296002.pem /etc/ssl/certs/296002.pem"
	I1009 20:19:58.637529  495888 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/296002.pem
	I1009 20:19:58.641561  495888 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  9 19:08 /usr/share/ca-certificates/296002.pem
	I1009 20:19:58.641652  495888 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/296002.pem
	I1009 20:19:58.687261  495888 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/296002.pem /etc/ssl/certs/51391683.0"
	I1009 20:19:58.695792  495888 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2960022.pem && ln -fs /usr/share/ca-certificates/2960022.pem /etc/ssl/certs/2960022.pem"
	I1009 20:19:58.704978  495888 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2960022.pem
	I1009 20:19:58.709478  495888 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  9 19:08 /usr/share/ca-certificates/2960022.pem
	I1009 20:19:58.709569  495888 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2960022.pem
	I1009 20:19:58.751071  495888 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2960022.pem /etc/ssl/certs/3ec20f2e.0"
	I1009 20:19:58.759246  495888 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1009 20:19:58.767814  495888 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1009 20:19:58.772140  495888 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  9 19:01 /usr/share/ca-certificates/minikubeCA.pem
	I1009 20:19:58.772208  495888 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1009 20:19:58.813601  495888 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
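The openssl x509 -hash calls above implement OpenSSL's hashed-directory lookup: the printed subject hash (b5213941 for minikubeCA.pem here) becomes the name of a .0 symlink under /etc/ssl/certs, which is how TLS clients using the system trust store find the CA. The same steps done by hand, using the paths from this log:

    # compute the subject hash OpenSSL uses for lookup
    h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    # create the hashed symlink (the log creates /etc/ssl/certs/b5213941.0)
    sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${h}.0"
    # or rebuild every hash link in the directory at once (OpenSSL 1.1+)
    sudo openssl rehash /etc/ssl/certs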
	I1009 20:19:58.821600  495888 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1009 20:19:58.825585  495888 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1009 20:19:58.867122  495888 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1009 20:19:58.915221  495888 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1009 20:19:58.961581  495888 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1009 20:19:59.019501  495888 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1009 20:19:59.064706  495888 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
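Each -checkend 86400 invocation above asks OpenSSL whether the certificate will still be valid 86400 seconds (24 hours) from now; the command exits non-zero if not, which is what appears to steer minikube toward regenerating a certificate instead of reusing it. The exit-code contract, shown with one of the paths from this log:

    if sudo openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400; then
        echo "certificate is valid for at least another 24h"
    else
        echo "certificate expires within 24h (or has already expired)"
    fi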
	I1009 20:19:59.118556  495888 kubeadm.go:400] StartCluster: {Name:embed-certs-565110 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-565110 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APISe
rverNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker B
inaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1009 20:19:59.118710  495888 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1009 20:19:59.118804  495888 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1009 20:19:59.210857  495888 cri.go:89] found id: "1de1928d9c10a7383f82f9d07f373a124ba301e004ce8acd88dd8a940cd3c874"
	I1009 20:19:59.210931  495888 cri.go:89] found id: "263af593d94482c92965e6f0511548fd1ccf9f2292e732c23158498a550ac2a4"
	I1009 20:19:59.210953  495888 cri.go:89] found id: "e15b99435508a3068f9f9d4d692dd1bd7f56391601b5b0179b6642e79aa3078f"
	I1009 20:19:59.210979  495888 cri.go:89] found id: "6d66a1c644fe699013f3d024b65f4dfa2c5f6bb2e344eef4ab51199503d6bb1f"
	I1009 20:19:59.211008  495888 cri.go:89] found id: ""
	I1009 20:19:59.211087  495888 ssh_runner.go:195] Run: sudo runc list -f json
	W1009 20:19:59.238031  495888 kubeadm.go:407] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-09T20:19:59Z" level=error msg="open /run/runc: no such file or directory"
	I1009 20:19:59.238192  495888 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1009 20:19:59.251518  495888 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1009 20:19:59.251585  495888 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1009 20:19:59.251666  495888 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1009 20:19:59.267653  495888 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1009 20:19:59.268282  495888 kubeconfig.go:47] verify endpoint returned: get endpoint: "embed-certs-565110" does not appear in /home/jenkins/minikube-integration/21683-294150/kubeconfig
	I1009 20:19:59.268619  495888 kubeconfig.go:62] /home/jenkins/minikube-integration/21683-294150/kubeconfig needs updating (will repair): [kubeconfig missing "embed-certs-565110" cluster setting kubeconfig missing "embed-certs-565110" context setting]
	I1009 20:19:59.269134  495888 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-294150/kubeconfig: {Name:mke4661344aa77e10bf6690825e8d3a29b29b411 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 20:19:59.270821  495888 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1009 20:19:59.290193  495888 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.76.2
	I1009 20:19:59.290271  495888 kubeadm.go:601] duration metric: took 38.666826ms to restartPrimaryControlPlane
	I1009 20:19:59.290297  495888 kubeadm.go:402] duration metric: took 171.753193ms to StartCluster
	I1009 20:19:59.290339  495888 settings.go:142] acquiring lock: {Name:mk20228ebaa2294ae35726600a0d8058088b24a4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 20:19:59.290426  495888 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21683-294150/kubeconfig
	I1009 20:19:59.292269  495888 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-294150/kubeconfig: {Name:mke4661344aa77e10bf6690825e8d3a29b29b411 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 20:19:59.297424  495888 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1009 20:19:59.297656  495888 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1009 20:19:59.301070  495888 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-565110"
	I1009 20:19:59.301153  495888 addons.go:238] Setting addon storage-provisioner=true in "embed-certs-565110"
	W1009 20:19:59.301182  495888 addons.go:247] addon storage-provisioner should already be in state true
	I1009 20:19:59.301226  495888 host.go:66] Checking if "embed-certs-565110" exists ...
	I1009 20:19:59.301774  495888 cli_runner.go:164] Run: docker container inspect embed-certs-565110 --format={{.State.Status}}
	I1009 20:19:59.304957  495888 out.go:179] * Verifying Kubernetes components...
	I1009 20:19:59.308175  495888 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 20:19:59.311519  495888 addons.go:69] Setting default-storageclass=true in profile "embed-certs-565110"
	I1009 20:19:59.311558  495888 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-565110"
	I1009 20:19:59.311891  495888 cli_runner.go:164] Run: docker container inspect embed-certs-565110 --format={{.State.Status}}
	I1009 20:19:59.321256  495888 addons.go:69] Setting dashboard=true in profile "embed-certs-565110"
	I1009 20:19:59.321286  495888 addons.go:238] Setting addon dashboard=true in "embed-certs-565110"
	W1009 20:19:59.321295  495888 addons.go:247] addon dashboard should already be in state true
	I1009 20:19:59.321329  495888 host.go:66] Checking if "embed-certs-565110" exists ...
	I1009 20:19:59.321807  495888 cli_runner.go:164] Run: docker container inspect embed-certs-565110 --format={{.State.Status}}
	I1009 20:19:59.297901  495888 config.go:182] Loaded profile config "embed-certs-565110": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1009 20:19:59.346629  495888 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1009 20:19:59.351650  495888 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1009 20:19:59.351675  495888 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1009 20:19:59.351737  495888 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-565110
	I1009 20:19:59.370094  495888 addons.go:238] Setting addon default-storageclass=true in "embed-certs-565110"
	W1009 20:19:59.370120  495888 addons.go:247] addon default-storageclass should already be in state true
	I1009 20:19:59.370146  495888 host.go:66] Checking if "embed-certs-565110" exists ...
	I1009 20:19:59.370575  495888 cli_runner.go:164] Run: docker container inspect embed-certs-565110 --format={{.State.Status}}
	I1009 20:19:59.387047  495888 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1009 20:19:59.390364  495888 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1009 20:19:59.396710  495888 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1009 20:19:59.396746  495888 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1009 20:19:59.396832  495888 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-565110
	I1009 20:19:59.429347  495888 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1009 20:19:59.429370  495888 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1009 20:19:59.429437  495888 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-565110
	I1009 20:19:59.430849  495888 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33446 SSHKeyPath:/home/jenkins/minikube-integration/21683-294150/.minikube/machines/embed-certs-565110/id_rsa Username:docker}
	I1009 20:19:59.464220  495888 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33446 SSHKeyPath:/home/jenkins/minikube-integration/21683-294150/.minikube/machines/embed-certs-565110/id_rsa Username:docker}
	I1009 20:19:59.473433  495888 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33446 SSHKeyPath:/home/jenkins/minikube-integration/21683-294150/.minikube/machines/embed-certs-565110/id_rsa Username:docker}
	I1009 20:19:59.694365  495888 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1009 20:19:59.717524  495888 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1009 20:19:59.840086  495888 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1009 20:19:59.840111  495888 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1009 20:19:59.864918  495888 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1009 20:19:59.880095  495888 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1009 20:19:59.880122  495888 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1009 20:19:59.900442  495888 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1009 20:19:59.900468  495888 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1009 20:19:59.917184  495888 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1009 20:19:59.917207  495888 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1009 20:19:59.939131  495888 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1009 20:19:59.939158  495888 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1009 20:19:59.990478  495888 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1009 20:19:59.990505  495888 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1009 20:20:00.111657  495888 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1009 20:20:00.111680  495888 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1009 20:20:00.389825  495888 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1009 20:20:00.389849  495888 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1009 20:20:00.555191  495888 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1009 20:20:00.555224  495888 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1009 20:20:00.588057  495888 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1009 20:19:59.174811  492745 node_ready.go:57] node "default-k8s-diff-port-417984" has "Ready":"False" status (will retry)
	W1009 20:20:01.175375  492745 node_ready.go:57] node "default-k8s-diff-port-417984" has "Ready":"False" status (will retry)
	I1009 20:20:05.769604  495888 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (6.075202514s)
	I1009 20:20:05.769670  495888 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (6.052119796s)
	I1009 20:20:05.769700  495888 node_ready.go:35] waiting up to 6m0s for node "embed-certs-565110" to be "Ready" ...
	I1009 20:20:05.770038  495888 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (5.905095554s)
	I1009 20:20:05.812828  495888 node_ready.go:49] node "embed-certs-565110" is "Ready"
	I1009 20:20:05.812865  495888 node_ready.go:38] duration metric: took 43.143299ms for node "embed-certs-565110" to be "Ready" ...
	I1009 20:20:05.812881  495888 api_server.go:52] waiting for apiserver process to appear ...
	I1009 20:20:05.812944  495888 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:20:05.956969  495888 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (5.368861055s)
	I1009 20:20:05.957203  495888 api_server.go:72] duration metric: took 6.656317655s to wait for apiserver process to appear ...
	I1009 20:20:05.957223  495888 api_server.go:88] waiting for apiserver healthz status ...
	I1009 20:20:05.957275  495888 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1009 20:20:05.960249  495888 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p embed-certs-565110 addons enable metrics-server
	
	I1009 20:20:05.963094  495888 out.go:179] * Enabled addons: storage-provisioner, default-storageclass, dashboard
	I1009 20:20:05.966069  495888 addons.go:514] duration metric: took 6.668415407s for enable addons: enabled=[storage-provisioner default-storageclass dashboard]
	I1009 20:20:05.968487  495888 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
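The healthz probe above talks to the API server directly at https://192.168.76.2:8443/healthz and accepts an HTTP 200 with the body "ok". The same check can usually be reproduced from the host without credentials, since the default system:public-info-viewer binding exposes /healthz, /readyz and /version to unauthenticated callers; the address and port are taken from this log, and with the docker driver the container IP is reachable from a Linux host:

    curl -sk https://192.168.76.2:8443/healthz; echo
    # per-check detail when debugging a slow start
    curl -sk "https://192.168.76.2:8443/readyz?verbose"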
	I1009 20:20:05.969658  495888 api_server.go:141] control plane version: v1.34.1
	I1009 20:20:05.969700  495888 api_server.go:131] duration metric: took 12.429949ms to wait for apiserver health ...
	I1009 20:20:05.969710  495888 system_pods.go:43] waiting for kube-system pods to appear ...
	I1009 20:20:05.973374  495888 system_pods.go:59] 8 kube-system pods found
	I1009 20:20:05.973419  495888 system_pods.go:61] "coredns-66bc5c9577-zmqwp" [ff3de144-4c77-4486-be1e-ab88492e6a18] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1009 20:20:05.973429  495888 system_pods.go:61] "etcd-embed-certs-565110" [4ad4c426-96dc-4bd7-bf86-efc6658f3526] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1009 20:20:05.973434  495888 system_pods.go:61] "kindnet-mjfwz" [f079f818-4d35-4673-ab85-6b2fe322c9f9] Running
	I1009 20:20:05.973441  495888 system_pods.go:61] "kube-apiserver-embed-certs-565110" [5a497a15-f487-4c78-bf3e-a53c6d9f83db] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1009 20:20:05.973449  495888 system_pods.go:61] "kube-controller-manager-embed-certs-565110" [7460b871-81b4-49ff-bad1-b30126a8635c] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1009 20:20:05.973454  495888 system_pods.go:61] "kube-proxy-bhwvw" [f9d0b727-064f-4a1c-88e2-e238e5f43c4b] Running
	I1009 20:20:05.973470  495888 system_pods.go:61] "kube-scheduler-embed-certs-565110" [f706c945-9f4f-4f6d-83f8-c6cddb3ff41d] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1009 20:20:05.973474  495888 system_pods.go:61] "storage-provisioner" [9811b3ef-6b1c-42ea-a8c8-bdf0028bd024] Running
	I1009 20:20:05.973480  495888 system_pods.go:74] duration metric: took 3.763873ms to wait for pod list to return data ...
	I1009 20:20:05.973491  495888 default_sa.go:34] waiting for default service account to be created ...
	I1009 20:20:05.976144  495888 default_sa.go:45] found service account: "default"
	I1009 20:20:05.976166  495888 default_sa.go:55] duration metric: took 2.669804ms for default service account to be created ...
	I1009 20:20:05.976174  495888 system_pods.go:116] waiting for k8s-apps to be running ...
	I1009 20:20:05.980886  495888 system_pods.go:86] 8 kube-system pods found
	I1009 20:20:05.980930  495888 system_pods.go:89] "coredns-66bc5c9577-zmqwp" [ff3de144-4c77-4486-be1e-ab88492e6a18] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1009 20:20:05.980940  495888 system_pods.go:89] "etcd-embed-certs-565110" [4ad4c426-96dc-4bd7-bf86-efc6658f3526] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1009 20:20:05.980946  495888 system_pods.go:89] "kindnet-mjfwz" [f079f818-4d35-4673-ab85-6b2fe322c9f9] Running
	I1009 20:20:05.980955  495888 system_pods.go:89] "kube-apiserver-embed-certs-565110" [5a497a15-f487-4c78-bf3e-a53c6d9f83db] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1009 20:20:05.980963  495888 system_pods.go:89] "kube-controller-manager-embed-certs-565110" [7460b871-81b4-49ff-bad1-b30126a8635c] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1009 20:20:05.980968  495888 system_pods.go:89] "kube-proxy-bhwvw" [f9d0b727-064f-4a1c-88e2-e238e5f43c4b] Running
	I1009 20:20:05.980992  495888 system_pods.go:89] "kube-scheduler-embed-certs-565110" [f706c945-9f4f-4f6d-83f8-c6cddb3ff41d] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1009 20:20:05.981000  495888 system_pods.go:89] "storage-provisioner" [9811b3ef-6b1c-42ea-a8c8-bdf0028bd024] Running
	I1009 20:20:05.981007  495888 system_pods.go:126] duration metric: took 4.827699ms to wait for k8s-apps to be running ...
	I1009 20:20:05.981021  495888 system_svc.go:44] waiting for kubelet service to be running ....
	I1009 20:20:05.981085  495888 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1009 20:20:05.997748  495888 system_svc.go:56] duration metric: took 16.717209ms WaitForService to wait for kubelet
	I1009 20:20:05.997790  495888 kubeadm.go:586] duration metric: took 6.696906359s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1009 20:20:05.997808  495888 node_conditions.go:102] verifying NodePressure condition ...
	I1009 20:20:06.009632  495888 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1009 20:20:06.009684  495888 node_conditions.go:123] node cpu capacity is 2
	I1009 20:20:06.009700  495888 node_conditions.go:105] duration metric: took 11.886647ms to run NodePressure ...
	I1009 20:20:06.009715  495888 start.go:242] waiting for startup goroutines ...
	I1009 20:20:06.009723  495888 start.go:247] waiting for cluster config update ...
	I1009 20:20:06.009735  495888 start.go:256] writing updated cluster config ...
	I1009 20:20:06.010128  495888 ssh_runner.go:195] Run: rm -f paused
	I1009 20:20:06.015468  495888 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1009 20:20:06.020318  495888 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-zmqwp" in "kube-system" namespace to be "Ready" or be gone ...
	W1009 20:20:03.675557  492745 node_ready.go:57] node "default-k8s-diff-port-417984" has "Ready":"False" status (will retry)
	W1009 20:20:06.175568  492745 node_ready.go:57] node "default-k8s-diff-port-417984" has "Ready":"False" status (will retry)
	W1009 20:20:08.026794  495888 pod_ready.go:104] pod "coredns-66bc5c9577-zmqwp" is not "Ready", error: <nil>
	W1009 20:20:10.028306  495888 pod_ready.go:104] pod "coredns-66bc5c9577-zmqwp" is not "Ready", error: <nil>
	W1009 20:20:08.674193  492745 node_ready.go:57] node "default-k8s-diff-port-417984" has "Ready":"False" status (will retry)
	W1009 20:20:11.174725  492745 node_ready.go:57] node "default-k8s-diff-port-417984" has "Ready":"False" status (will retry)
	W1009 20:20:12.030211  495888 pod_ready.go:104] pod "coredns-66bc5c9577-zmqwp" is not "Ready", error: <nil>
	W1009 20:20:14.031972  495888 pod_ready.go:104] pod "coredns-66bc5c9577-zmqwp" is not "Ready", error: <nil>
	W1009 20:20:13.174928  492745 node_ready.go:57] node "default-k8s-diff-port-417984" has "Ready":"False" status (will retry)
	W1009 20:20:15.674695  492745 node_ready.go:57] node "default-k8s-diff-port-417984" has "Ready":"False" status (will retry)
	W1009 20:20:16.527708  495888 pod_ready.go:104] pod "coredns-66bc5c9577-zmqwp" is not "Ready", error: <nil>
	W1009 20:20:19.026340  495888 pod_ready.go:104] pod "coredns-66bc5c9577-zmqwp" is not "Ready", error: <nil>
	W1009 20:20:17.675546  492745 node_ready.go:57] node "default-k8s-diff-port-417984" has "Ready":"False" status (will retry)
	W1009 20:20:20.174799  492745 node_ready.go:57] node "default-k8s-diff-port-417984" has "Ready":"False" status (will retry)
	W1009 20:20:21.526662  495888 pod_ready.go:104] pod "coredns-66bc5c9577-zmqwp" is not "Ready", error: <nil>
	W1009 20:20:24.026394  495888 pod_ready.go:104] pod "coredns-66bc5c9577-zmqwp" is not "Ready", error: <nil>
	W1009 20:20:26.026883  495888 pod_ready.go:104] pod "coredns-66bc5c9577-zmqwp" is not "Ready", error: <nil>
	W1009 20:20:22.674016  492745 node_ready.go:57] node "default-k8s-diff-port-417984" has "Ready":"False" status (will retry)
	W1009 20:20:24.674813  492745 node_ready.go:57] node "default-k8s-diff-port-417984" has "Ready":"False" status (will retry)
	W1009 20:20:28.027437  495888 pod_ready.go:104] pod "coredns-66bc5c9577-zmqwp" is not "Ready", error: <nil>
	W1009 20:20:30.082635  495888 pod_ready.go:104] pod "coredns-66bc5c9577-zmqwp" is not "Ready", error: <nil>
	W1009 20:20:27.174625  492745 node_ready.go:57] node "default-k8s-diff-port-417984" has "Ready":"False" status (will retry)
	W1009 20:20:29.675082  492745 node_ready.go:57] node "default-k8s-diff-port-417984" has "Ready":"False" status (will retry)
	I1009 20:20:32.174806  492745 node_ready.go:49] node "default-k8s-diff-port-417984" is "Ready"
	I1009 20:20:32.174843  492745 node_ready.go:38] duration metric: took 41.503651759s for node "default-k8s-diff-port-417984" to be "Ready" ...
	I1009 20:20:32.174857  492745 api_server.go:52] waiting for apiserver process to appear ...
	I1009 20:20:32.174913  492745 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:20:32.190088  492745 api_server.go:72] duration metric: took 42.367482347s to wait for apiserver process to appear ...
	I1009 20:20:32.190112  492745 api_server.go:88] waiting for apiserver healthz status ...
	I1009 20:20:32.190133  492745 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8444/healthz ...
	I1009 20:20:32.198669  492745 api_server.go:279] https://192.168.85.2:8444/healthz returned 200:
	ok
	I1009 20:20:32.199868  492745 api_server.go:141] control plane version: v1.34.1
	I1009 20:20:32.199893  492745 api_server.go:131] duration metric: took 9.773485ms to wait for apiserver health ...
	I1009 20:20:32.199901  492745 system_pods.go:43] waiting for kube-system pods to appear ...
	I1009 20:20:32.205780  492745 system_pods.go:59] 8 kube-system pods found
	I1009 20:20:32.205895  492745 system_pods.go:61] "coredns-66bc5c9577-4c2vb" [1372d4eb-13df-43ba-add1-18330c9c110d] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1009 20:20:32.205919  492745 system_pods.go:61] "etcd-default-k8s-diff-port-417984" [2f46d319-463a-4bf1-b9f0-33d017fe17c5] Running
	I1009 20:20:32.205965  492745 system_pods.go:61] "kindnet-s57gp" [c69cde96-0e11-4f41-a715-961981d36066] Running
	I1009 20:20:32.205987  492745 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-417984" [fff706cc-3c18-400c-9fb7-10cec1723bc7] Running
	I1009 20:20:32.206011  492745 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-417984" [b8ecf531-e830-4a99-abcc-1fc8175c1598] Running
	I1009 20:20:32.206050  492745 system_pods.go:61] "kube-proxy-jnlzf" [c888f2c2-aaea-43d1-b81a-fe2762b4f733] Running
	I1009 20:20:32.206087  492745 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-417984" [27737a80-8846-4a8f-b4c6-2845ddca3cca] Running
	I1009 20:20:32.206111  492745 system_pods.go:61] "storage-provisioner" [35085697-b4c2-4265-a1eb-2ced25791f19] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1009 20:20:32.206136  492745 system_pods.go:74] duration metric: took 6.227019ms to wait for pod list to return data ...
	I1009 20:20:32.206167  492745 default_sa.go:34] waiting for default service account to be created ...
	I1009 20:20:32.209460  492745 default_sa.go:45] found service account: "default"
	I1009 20:20:32.209485  492745 default_sa.go:55] duration metric: took 3.292588ms for default service account to be created ...
	I1009 20:20:32.209494  492745 system_pods.go:116] waiting for k8s-apps to be running ...
	I1009 20:20:32.213461  492745 system_pods.go:86] 8 kube-system pods found
	I1009 20:20:32.213493  492745 system_pods.go:89] "coredns-66bc5c9577-4c2vb" [1372d4eb-13df-43ba-add1-18330c9c110d] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1009 20:20:32.213501  492745 system_pods.go:89] "etcd-default-k8s-diff-port-417984" [2f46d319-463a-4bf1-b9f0-33d017fe17c5] Running
	I1009 20:20:32.213507  492745 system_pods.go:89] "kindnet-s57gp" [c69cde96-0e11-4f41-a715-961981d36066] Running
	I1009 20:20:32.213512  492745 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-417984" [fff706cc-3c18-400c-9fb7-10cec1723bc7] Running
	I1009 20:20:32.213516  492745 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-417984" [b8ecf531-e830-4a99-abcc-1fc8175c1598] Running
	I1009 20:20:32.213521  492745 system_pods.go:89] "kube-proxy-jnlzf" [c888f2c2-aaea-43d1-b81a-fe2762b4f733] Running
	I1009 20:20:32.213525  492745 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-417984" [27737a80-8846-4a8f-b4c6-2845ddca3cca] Running
	I1009 20:20:32.213530  492745 system_pods.go:89] "storage-provisioner" [35085697-b4c2-4265-a1eb-2ced25791f19] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1009 20:20:32.213551  492745 retry.go:31] will retry after 208.993801ms: missing components: kube-dns
	I1009 20:20:32.426651  492745 system_pods.go:86] 8 kube-system pods found
	I1009 20:20:32.426688  492745 system_pods.go:89] "coredns-66bc5c9577-4c2vb" [1372d4eb-13df-43ba-add1-18330c9c110d] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1009 20:20:32.426697  492745 system_pods.go:89] "etcd-default-k8s-diff-port-417984" [2f46d319-463a-4bf1-b9f0-33d017fe17c5] Running
	I1009 20:20:32.426706  492745 system_pods.go:89] "kindnet-s57gp" [c69cde96-0e11-4f41-a715-961981d36066] Running
	I1009 20:20:32.426710  492745 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-417984" [fff706cc-3c18-400c-9fb7-10cec1723bc7] Running
	I1009 20:20:32.426715  492745 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-417984" [b8ecf531-e830-4a99-abcc-1fc8175c1598] Running
	I1009 20:20:32.426720  492745 system_pods.go:89] "kube-proxy-jnlzf" [c888f2c2-aaea-43d1-b81a-fe2762b4f733] Running
	I1009 20:20:32.426724  492745 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-417984" [27737a80-8846-4a8f-b4c6-2845ddca3cca] Running
	I1009 20:20:32.426729  492745 system_pods.go:89] "storage-provisioner" [35085697-b4c2-4265-a1eb-2ced25791f19] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1009 20:20:32.426744  492745 retry.go:31] will retry after 247.744501ms: missing components: kube-dns
	I1009 20:20:32.678852  492745 system_pods.go:86] 8 kube-system pods found
	I1009 20:20:32.678888  492745 system_pods.go:89] "coredns-66bc5c9577-4c2vb" [1372d4eb-13df-43ba-add1-18330c9c110d] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1009 20:20:32.678896  492745 system_pods.go:89] "etcd-default-k8s-diff-port-417984" [2f46d319-463a-4bf1-b9f0-33d017fe17c5] Running
	I1009 20:20:32.678902  492745 system_pods.go:89] "kindnet-s57gp" [c69cde96-0e11-4f41-a715-961981d36066] Running
	I1009 20:20:32.678906  492745 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-417984" [fff706cc-3c18-400c-9fb7-10cec1723bc7] Running
	I1009 20:20:32.678910  492745 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-417984" [b8ecf531-e830-4a99-abcc-1fc8175c1598] Running
	I1009 20:20:32.678914  492745 system_pods.go:89] "kube-proxy-jnlzf" [c888f2c2-aaea-43d1-b81a-fe2762b4f733] Running
	I1009 20:20:32.678918  492745 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-417984" [27737a80-8846-4a8f-b4c6-2845ddca3cca] Running
	I1009 20:20:32.678928  492745 system_pods.go:89] "storage-provisioner" [35085697-b4c2-4265-a1eb-2ced25791f19] Running
	I1009 20:20:32.678941  492745 system_pods.go:126] duration metric: took 469.440984ms to wait for k8s-apps to be running ...
	I1009 20:20:32.678952  492745 system_svc.go:44] waiting for kubelet service to be running ....
	I1009 20:20:32.679021  492745 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1009 20:20:32.695608  492745 system_svc.go:56] duration metric: took 16.635802ms WaitForService to wait for kubelet
	I1009 20:20:32.695641  492745 kubeadm.go:586] duration metric: took 42.873046641s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1009 20:20:32.695745  492745 node_conditions.go:102] verifying NodePressure condition ...
	I1009 20:20:32.699281  492745 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1009 20:20:32.699327  492745 node_conditions.go:123] node cpu capacity is 2
	I1009 20:20:32.699341  492745 node_conditions.go:105] duration metric: took 3.588625ms to run NodePressure ...
	I1009 20:20:32.699353  492745 start.go:242] waiting for startup goroutines ...
	I1009 20:20:32.699362  492745 start.go:247] waiting for cluster config update ...
	I1009 20:20:32.699378  492745 start.go:256] writing updated cluster config ...
	I1009 20:20:32.699753  492745 ssh_runner.go:195] Run: rm -f paused
	I1009 20:20:32.704321  492745 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1009 20:20:32.708386  492745 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-4c2vb" in "kube-system" namespace to be "Ready" or be gone ...
	I1009 20:20:33.714486  492745 pod_ready.go:94] pod "coredns-66bc5c9577-4c2vb" is "Ready"
	I1009 20:20:33.714516  492745 pod_ready.go:86] duration metric: took 1.006106251s for pod "coredns-66bc5c9577-4c2vb" in "kube-system" namespace to be "Ready" or be gone ...
	I1009 20:20:33.717510  492745 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-417984" in "kube-system" namespace to be "Ready" or be gone ...
	I1009 20:20:33.724030  492745 pod_ready.go:94] pod "etcd-default-k8s-diff-port-417984" is "Ready"
	I1009 20:20:33.724067  492745 pod_ready.go:86] duration metric: took 6.523752ms for pod "etcd-default-k8s-diff-port-417984" in "kube-system" namespace to be "Ready" or be gone ...
	I1009 20:20:33.727654  492745 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-417984" in "kube-system" namespace to be "Ready" or be gone ...
	I1009 20:20:33.732867  492745 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-417984" is "Ready"
	I1009 20:20:33.732899  492745 pod_ready.go:86] duration metric: took 5.219538ms for pod "kube-apiserver-default-k8s-diff-port-417984" in "kube-system" namespace to be "Ready" or be gone ...
	I1009 20:20:33.735435  492745 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-417984" in "kube-system" namespace to be "Ready" or be gone ...
	I1009 20:20:33.914645  492745 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-417984" is "Ready"
	I1009 20:20:33.914725  492745 pod_ready.go:86] duration metric: took 179.260924ms for pod "kube-controller-manager-default-k8s-diff-port-417984" in "kube-system" namespace to be "Ready" or be gone ...
	I1009 20:20:34.112850  492745 pod_ready.go:83] waiting for pod "kube-proxy-jnlzf" in "kube-system" namespace to be "Ready" or be gone ...
	I1009 20:20:34.512679  492745 pod_ready.go:94] pod "kube-proxy-jnlzf" is "Ready"
	I1009 20:20:34.512722  492745 pod_ready.go:86] duration metric: took 399.843804ms for pod "kube-proxy-jnlzf" in "kube-system" namespace to be "Ready" or be gone ...
	I1009 20:20:34.713169  492745 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-417984" in "kube-system" namespace to be "Ready" or be gone ...
	I1009 20:20:35.113508  492745 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-417984" is "Ready"
	I1009 20:20:35.113547  492745 pod_ready.go:86] duration metric: took 400.349632ms for pod "kube-scheduler-default-k8s-diff-port-417984" in "kube-system" namespace to be "Ready" or be gone ...
	I1009 20:20:35.113560  492745 pod_ready.go:40] duration metric: took 2.409163956s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1009 20:20:35.180770  492745 start.go:628] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1009 20:20:35.185314  492745 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-417984" cluster and "default" namespace by default
	W1009 20:20:32.526518  495888 pod_ready.go:104] pod "coredns-66bc5c9577-zmqwp" is not "Ready", error: <nil>
	W1009 20:20:35.026457  495888 pod_ready.go:104] pod "coredns-66bc5c9577-zmqwp" is not "Ready", error: <nil>
	W1009 20:20:37.028006  495888 pod_ready.go:104] pod "coredns-66bc5c9577-zmqwp" is not "Ready", error: <nil>
	I1009 20:20:39.026861  495888 pod_ready.go:94] pod "coredns-66bc5c9577-zmqwp" is "Ready"
	I1009 20:20:39.026887  495888 pod_ready.go:86] duration metric: took 33.006531676s for pod "coredns-66bc5c9577-zmqwp" in "kube-system" namespace to be "Ready" or be gone ...
	I1009 20:20:39.029834  495888 pod_ready.go:83] waiting for pod "etcd-embed-certs-565110" in "kube-system" namespace to be "Ready" or be gone ...
	I1009 20:20:39.039610  495888 pod_ready.go:94] pod "etcd-embed-certs-565110" is "Ready"
	I1009 20:20:39.039636  495888 pod_ready.go:86] duration metric: took 9.73968ms for pod "etcd-embed-certs-565110" in "kube-system" namespace to be "Ready" or be gone ...
	I1009 20:20:39.042389  495888 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-565110" in "kube-system" namespace to be "Ready" or be gone ...
	I1009 20:20:39.047521  495888 pod_ready.go:94] pod "kube-apiserver-embed-certs-565110" is "Ready"
	I1009 20:20:39.047551  495888 pod_ready.go:86] duration metric: took 5.132432ms for pod "kube-apiserver-embed-certs-565110" in "kube-system" namespace to be "Ready" or be gone ...
	I1009 20:20:39.050305  495888 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-565110" in "kube-system" namespace to be "Ready" or be gone ...
	I1009 20:20:39.224004  495888 pod_ready.go:94] pod "kube-controller-manager-embed-certs-565110" is "Ready"
	I1009 20:20:39.224037  495888 pod_ready.go:86] duration metric: took 173.70233ms for pod "kube-controller-manager-embed-certs-565110" in "kube-system" namespace to be "Ready" or be gone ...
	I1009 20:20:39.424038  495888 pod_ready.go:83] waiting for pod "kube-proxy-bhwvw" in "kube-system" namespace to be "Ready" or be gone ...
	I1009 20:20:39.823381  495888 pod_ready.go:94] pod "kube-proxy-bhwvw" is "Ready"
	I1009 20:20:39.823451  495888 pod_ready.go:86] duration metric: took 399.38654ms for pod "kube-proxy-bhwvw" in "kube-system" namespace to be "Ready" or be gone ...
	I1009 20:20:40.043782  495888 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-565110" in "kube-system" namespace to be "Ready" or be gone ...
	I1009 20:20:40.424476  495888 pod_ready.go:94] pod "kube-scheduler-embed-certs-565110" is "Ready"
	I1009 20:20:40.424500  495888 pod_ready.go:86] duration metric: took 380.690278ms for pod "kube-scheduler-embed-certs-565110" in "kube-system" namespace to be "Ready" or be gone ...
	I1009 20:20:40.424512  495888 pod_ready.go:40] duration metric: took 34.409013666s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1009 20:20:40.482252  495888 start.go:628] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1009 20:20:40.485424  495888 out.go:179] * Done! kubectl is now configured to use "embed-certs-565110" cluster and "default" namespace by default
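The pod_ready polling above is minikube's own readiness loop; once the kubeconfig context has been written (named after the profile, per the final line), an equivalent check can be expressed with kubectl wait. The timeouts below mirror the 6m node / 4m pod budgets used in this log and are shown only as a sketch:

    kubectl --context embed-certs-565110 wait node embed-certs-565110 \
        --for=condition=Ready --timeout=6m
    kubectl --context embed-certs-565110 -n kube-system wait pod \
        -l k8s-app=kube-dns --for=condition=Ready --timeout=4m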
	
	
	==> CRI-O <==
	Oct 09 20:20:45 embed-certs-565110 crio[651]: time="2025-10-09T20:20:45.659042377Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 09 20:20:45 embed-certs-565110 crio[651]: time="2025-10-09T20:20:45.664574511Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 09 20:20:45 embed-certs-565110 crio[651]: time="2025-10-09T20:20:45.664612296Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 09 20:20:45 embed-certs-565110 crio[651]: time="2025-10-09T20:20:45.664629486Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 09 20:20:45 embed-certs-565110 crio[651]: time="2025-10-09T20:20:45.668372205Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 09 20:20:45 embed-certs-565110 crio[651]: time="2025-10-09T20:20:45.668407873Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 09 20:20:45 embed-certs-565110 crio[651]: time="2025-10-09T20:20:45.668425596Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 09 20:20:45 embed-certs-565110 crio[651]: time="2025-10-09T20:20:45.672822675Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 09 20:20:45 embed-certs-565110 crio[651]: time="2025-10-09T20:20:45.672924904Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 09 20:20:45 embed-certs-565110 crio[651]: time="2025-10-09T20:20:45.67295482Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 09 20:20:45 embed-certs-565110 crio[651]: time="2025-10-09T20:20:45.676878104Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 09 20:20:45 embed-certs-565110 crio[651]: time="2025-10-09T20:20:45.676917473Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 09 20:20:52 embed-certs-565110 crio[651]: time="2025-10-09T20:20:52.560850754Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=0d7676a1-e960-4971-ad41-92bd72a13986 name=/runtime.v1.ImageService/ImageStatus
	Oct 09 20:20:52 embed-certs-565110 crio[651]: time="2025-10-09T20:20:52.562747755Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=7a44e91f-86cf-41a5-bec2-b5582d19b765 name=/runtime.v1.ImageService/ImageStatus
	Oct 09 20:20:52 embed-certs-565110 crio[651]: time="2025-10-09T20:20:52.563758657Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-wvnph/dashboard-metrics-scraper" id=332fcc8b-fd5d-406a-887f-bd2bc413cf04 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 20:20:52 embed-certs-565110 crio[651]: time="2025-10-09T20:20:52.563972764Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 09 20:20:52 embed-certs-565110 crio[651]: time="2025-10-09T20:20:52.578401849Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 09 20:20:52 embed-certs-565110 crio[651]: time="2025-10-09T20:20:52.579001718Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 09 20:20:52 embed-certs-565110 crio[651]: time="2025-10-09T20:20:52.604121862Z" level=info msg="Created container 5a38898751c3190370fb093a21e09fb35270396f2335f7cf298f3ffeb676eab4: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-wvnph/dashboard-metrics-scraper" id=332fcc8b-fd5d-406a-887f-bd2bc413cf04 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 20:20:52 embed-certs-565110 crio[651]: time="2025-10-09T20:20:52.605368738Z" level=info msg="Starting container: 5a38898751c3190370fb093a21e09fb35270396f2335f7cf298f3ffeb676eab4" id=39ef6579-22b9-4fde-9b6c-a5822d9dfbbd name=/runtime.v1.RuntimeService/StartContainer
	Oct 09 20:20:52 embed-certs-565110 crio[651]: time="2025-10-09T20:20:52.608477668Z" level=info msg="Started container" PID=1735 containerID=5a38898751c3190370fb093a21e09fb35270396f2335f7cf298f3ffeb676eab4 description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-wvnph/dashboard-metrics-scraper id=39ef6579-22b9-4fde-9b6c-a5822d9dfbbd name=/runtime.v1.RuntimeService/StartContainer sandboxID=88693c847ec07cbc177bebd52670c8cf497a87577e562e09e963df26cb1f2eae
	Oct 09 20:20:52 embed-certs-565110 conmon[1733]: conmon 5a38898751c3190370fb <ninfo>: container 1735 exited with status 1
	Oct 09 20:20:52 embed-certs-565110 crio[651]: time="2025-10-09T20:20:52.856397595Z" level=info msg="Removing container: 02a0a6e50a9df815b7a8e5622e75cf0fd7bb066c9f4c849fe03efa883d3c54e9" id=d9f728bb-fd89-49b9-a1f4-7a1269093203 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 09 20:20:52 embed-certs-565110 crio[651]: time="2025-10-09T20:20:52.866793019Z" level=info msg="Error loading conmon cgroup of container 02a0a6e50a9df815b7a8e5622e75cf0fd7bb066c9f4c849fe03efa883d3c54e9: cgroup deleted" id=d9f728bb-fd89-49b9-a1f4-7a1269093203 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 09 20:20:52 embed-certs-565110 crio[651]: time="2025-10-09T20:20:52.870283362Z" level=info msg="Removed container 02a0a6e50a9df815b7a8e5622e75cf0fd7bb066c9f4c849fe03efa883d3c54e9: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-wvnph/dashboard-metrics-scraper" id=d9f728bb-fd89-49b9-a1f4-7a1269093203 name=/runtime.v1.RuntimeService/RemoveContainer
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	5a38898751c31       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           5 seconds ago       Exited              dashboard-metrics-scraper   3                   88693c847ec07       dashboard-metrics-scraper-6ffb444bf9-wvnph   kubernetes-dashboard
	0f270941e80ca       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           22 seconds ago      Running             storage-provisioner         2                   07ed3ab732764       storage-provisioner                          kube-system
	dcee20808a0ab       docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf   44 seconds ago      Running             kubernetes-dashboard        0                   2f659c12feea9       kubernetes-dashboard-855c9754f9-f7ckg        kubernetes-dashboard
	a5b26ac31259e       1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c                                           52 seconds ago      Running             busybox                     1                   cf6d3d0163c79       busybox                                      default
	6dd3b6f8859b6       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                           52 seconds ago      Running             coredns                     1                   c254ad9d2fd5e       coredns-66bc5c9577-zmqwp                     kube-system
	19c60abb724c1       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                           53 seconds ago      Running             kindnet-cni                 1                   560c70353a458       kindnet-mjfwz                                kube-system
	3717764bae0d2       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           53 seconds ago      Exited              storage-provisioner         1                   07ed3ab732764       storage-provisioner                          kube-system
	5bcf7f81c448e       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                           53 seconds ago      Running             kube-proxy                  1                   63bc433297ab8       kube-proxy-bhwvw                             kube-system
	1de1928d9c10a       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                           59 seconds ago      Running             kube-apiserver              1                   e775b526799a8       kube-apiserver-embed-certs-565110            kube-system
	263af593d9448       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                           59 seconds ago      Running             kube-controller-manager     1                   b3ca0480c2694       kube-controller-manager-embed-certs-565110   kube-system
	e15b99435508a       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                           59 seconds ago      Running             kube-scheduler              1                   6bc2934ce3cd2       kube-scheduler-embed-certs-565110            kube-system
	6d66a1c644fe6       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                           59 seconds ago      Running             etcd                        1                   54d46624eb736       etcd-embed-certs-565110                      kube-system
	
	
	==> coredns [6dd3b6f8859b6b73158b023597467ffd3bfbf74dba8207996ffb59ba32b783e5] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:35024 - 6237 "HINFO IN 4570175342769222036.437017538883807812. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.014932234s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	
	
	==> describe nodes <==
	Name:               embed-certs-565110
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=embed-certs-565110
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=67931d2cc0a2e29153be17ee4f2d502b8a45c9cb
	                    minikube.k8s.io/name=embed-certs-565110
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_09T20_18_36_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 09 Oct 2025 20:18:33 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-565110
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 09 Oct 2025 20:20:44 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 09 Oct 2025 20:20:35 +0000   Thu, 09 Oct 2025 20:18:27 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 09 Oct 2025 20:20:35 +0000   Thu, 09 Oct 2025 20:18:27 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 09 Oct 2025 20:20:35 +0000   Thu, 09 Oct 2025 20:18:27 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 09 Oct 2025 20:20:35 +0000   Thu, 09 Oct 2025 20:19:22 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    embed-certs-565110
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022292Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022292Ki
	  pods:               110
	System Info:
	  Machine ID:                 832389f4a3984c1ba73cd231980de142
	  System UUID:                b35d8597-f430-4f2f-bbdb-0cd122e89c1c
	  Boot ID:                    7eb9c7d9-8be8-415a-b196-052b632584a4
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         93s
	  kube-system                 coredns-66bc5c9577-zmqwp                      100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     2m17s
	  kube-system                 etcd-embed-certs-565110                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         2m24s
	  kube-system                 kindnet-mjfwz                                 100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      2m18s
	  kube-system                 kube-apiserver-embed-certs-565110             250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m23s
	  kube-system                 kube-controller-manager-embed-certs-565110    200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m23s
	  kube-system                 kube-proxy-bhwvw                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m18s
	  kube-system                 kube-scheduler-embed-certs-565110             100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m23s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m16s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-wvnph    0 (0%)        0 (0%)      0 (0%)           0 (0%)         50s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-f7ckg         0 (0%)        0 (0%)      0 (0%)           0 (0%)         50s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 2m16s              kube-proxy       
	  Normal   Starting                 52s                kube-proxy       
	  Normal   NodeHasSufficientPID     2m23s              kubelet          Node embed-certs-565110 status is now: NodeHasSufficientPID
	  Warning  CgroupV1                 2m23s              kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  2m23s              kubelet          Node embed-certs-565110 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m23s              kubelet          Node embed-certs-565110 status is now: NodeHasNoDiskPressure
	  Normal   Starting                 2m23s              kubelet          Starting kubelet.
	  Normal   RegisteredNode           2m19s              node-controller  Node embed-certs-565110 event: Registered Node embed-certs-565110 in Controller
	  Normal   NodeReady                96s                kubelet          Node embed-certs-565110 status is now: NodeReady
	  Normal   Starting                 60s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 60s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  60s (x8 over 60s)  kubelet          Node embed-certs-565110 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    60s (x8 over 60s)  kubelet          Node embed-certs-565110 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     60s (x8 over 60s)  kubelet          Node embed-certs-565110 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           51s                node-controller  Node embed-certs-565110 event: Registered Node embed-certs-565110 in Controller
	
	
	==> dmesg <==
	[Oct 9 19:49] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:50] overlayfs: idmapped layers are currently not supported
	[ +27.967875] overlayfs: idmapped layers are currently not supported
	[  +2.167003] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:51] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:52] overlayfs: idmapped layers are currently not supported
	[ +41.056229] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:53] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:54] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:55] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:57] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:59] overlayfs: idmapped layers are currently not supported
	[ +30.257956] overlayfs: idmapped layers are currently not supported
	[Oct 9 20:02] overlayfs: idmapped layers are currently not supported
	[Oct 9 20:04] overlayfs: idmapped layers are currently not supported
	[Oct 9 20:06] overlayfs: idmapped layers are currently not supported
	[Oct 9 20:12] overlayfs: idmapped layers are currently not supported
	[Oct 9 20:14] overlayfs: idmapped layers are currently not supported
	[Oct 9 20:15] overlayfs: idmapped layers are currently not supported
	[Oct 9 20:16] overlayfs: idmapped layers are currently not supported
	[ +23.810739] overlayfs: idmapped layers are currently not supported
	[Oct 9 20:18] overlayfs: idmapped layers are currently not supported
	[ +26.082927] overlayfs: idmapped layers are currently not supported
	[Oct 9 20:19] overlayfs: idmapped layers are currently not supported
	[ +21.956614] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [6d66a1c644fe699013f3d024b65f4dfa2c5f6bb2e344eef4ab51199503d6bb1f] <==
	{"level":"warn","ts":"2025-10-09T20:20:02.235584Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49416","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T20:20:02.263923Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49432","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T20:20:02.277952Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49450","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T20:20:02.307215Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49466","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T20:20:02.325402Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49484","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T20:20:02.355519Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49504","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T20:20:02.365513Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49520","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T20:20:02.381357Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49526","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T20:20:02.401240Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49538","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T20:20:02.418634Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49570","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T20:20:02.443512Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49582","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T20:20:02.466983Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49602","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T20:20:02.483938Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49626","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T20:20:02.499351Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49636","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T20:20:02.514502Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49656","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T20:20:02.539838Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49670","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T20:20:02.551996Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49690","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T20:20:02.568061Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49696","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T20:20:02.586113Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49714","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T20:20:02.603809Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49738","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T20:20:02.629191Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49760","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T20:20:02.657391Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49784","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T20:20:02.679408Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49806","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T20:20:02.689668Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49830","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T20:20:02.794493Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49846","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 20:20:58 up  3:03,  0 user,  load average: 3.24, 2.93, 2.14
	Linux embed-certs-565110 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [19c60abb724c168299c8076033b87385f420db683dd0f2474250da6b74aaf169] <==
	I1009 20:20:05.429732       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1009 20:20:05.429994       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1009 20:20:05.430121       1 main.go:148] setting mtu 1500 for CNI 
	I1009 20:20:05.430133       1 main.go:178] kindnetd IP family: "ipv4"
	I1009 20:20:05.430145       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-09T20:20:05Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1009 20:20:05.701599       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1009 20:20:05.701694       1 controller.go:381] "Waiting for informer caches to sync"
	I1009 20:20:05.701706       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1009 20:20:05.703147       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1009 20:20:35.701913       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1009 20:20:35.702902       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1009 20:20:35.703009       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1009 20:20:35.702914       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	I1009 20:20:37.302449       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1009 20:20:37.302501       1 metrics.go:72] Registering metrics
	I1009 20:20:37.302592       1 controller.go:711] "Syncing nftables rules"
	I1009 20:20:45.651694       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1009 20:20:45.651842       1 main.go:301] handling current node
	I1009 20:20:55.649286       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1009 20:20:55.649340       1 main.go:301] handling current node
	
	
	==> kube-apiserver [1de1928d9c10a7383f82f9d07f373a124ba301e004ce8acd88dd8a940cd3c874] <==
	I1009 20:20:04.042642       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1009 20:20:04.080429       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1009 20:20:04.091759       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1009 20:20:04.091875       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1009 20:20:04.094468       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1009 20:20:04.101959       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1009 20:20:04.102153       1 aggregator.go:171] initial CRD sync complete...
	I1009 20:20:04.102174       1 autoregister_controller.go:144] Starting autoregister controller
	I1009 20:20:04.102182       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1009 20:20:04.102189       1 cache.go:39] Caches are synced for autoregister controller
	I1009 20:20:04.104284       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1009 20:20:04.104319       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1009 20:20:04.118910       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1009 20:20:04.135816       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1009 20:20:04.530122       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1009 20:20:04.564858       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1009 20:20:05.260733       1 controller.go:667] quota admission added evaluator for: namespaces
	I1009 20:20:05.436599       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1009 20:20:05.563887       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1009 20:20:05.686194       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1009 20:20:05.931405       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.96.85.27"}
	I1009 20:20:05.950344       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.111.155.67"}
	I1009 20:20:07.910966       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1009 20:20:08.361489       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1009 20:20:08.459373       1 controller.go:667] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [263af593d94482c92965e6f0511548fd1ccf9f2292e732c23158498a550ac2a4] <==
	I1009 20:20:07.910155       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1009 20:20:07.916178       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1009 20:20:07.916538       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1009 20:20:07.916603       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1009 20:20:07.916648       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1009 20:20:07.916678       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1009 20:20:07.916720       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1009 20:20:07.917385       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1009 20:20:07.917466       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1009 20:20:07.918814       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1009 20:20:07.921262       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1009 20:20:07.927555       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1009 20:20:07.927616       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1009 20:20:07.927625       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1009 20:20:07.933707       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1009 20:20:07.940324       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1009 20:20:07.941587       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1009 20:20:07.952454       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1009 20:20:07.952554       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1009 20:20:07.953690       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1009 20:20:07.953763       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1009 20:20:07.953729       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1009 20:20:07.953748       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1009 20:20:07.960475       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1009 20:20:07.964763       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	
	
	==> kube-proxy [5bcf7f81c448e41e559806602e3f3a1d94582cbf78df0ab117caa5f14d6ba76a] <==
	I1009 20:20:05.895866       1 server_linux.go:53] "Using iptables proxy"
	I1009 20:20:06.067019       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1009 20:20:06.168141       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1009 20:20:06.168287       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1009 20:20:06.168451       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1009 20:20:06.205472       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1009 20:20:06.205594       1 server_linux.go:132] "Using iptables Proxier"
	I1009 20:20:06.212535       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1009 20:20:06.212977       1 server.go:527] "Version info" version="v1.34.1"
	I1009 20:20:06.213448       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1009 20:20:06.214872       1 config.go:200] "Starting service config controller"
	I1009 20:20:06.214939       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1009 20:20:06.214986       1 config.go:106] "Starting endpoint slice config controller"
	I1009 20:20:06.215012       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1009 20:20:06.215049       1 config.go:403] "Starting serviceCIDR config controller"
	I1009 20:20:06.215076       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1009 20:20:06.215736       1 config.go:309] "Starting node config controller"
	I1009 20:20:06.219098       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1009 20:20:06.219230       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1009 20:20:06.315111       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1009 20:20:06.315116       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1009 20:20:06.315155       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [e15b99435508a3068f9f9d4d692dd1bd7f56391601b5b0179b6642e79aa3078f] <==
	I1009 20:20:02.500099       1 serving.go:386] Generated self-signed cert in-memory
	I1009 20:20:06.249744       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1009 20:20:06.249784       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1009 20:20:06.256123       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1009 20:20:06.256172       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1009 20:20:06.256737       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1009 20:20:06.256759       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1009 20:20:06.256776       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1009 20:20:06.256876       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1009 20:20:06.257486       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1009 20:20:06.259049       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1009 20:20:06.356900       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1009 20:20:06.356981       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1009 20:20:06.357207       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	
	
	==> kubelet <==
	Oct 09 20:20:08 embed-certs-565110 kubelet[777]: I1009 20:20:08.687028     777 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tbm8v\" (UniqueName: \"kubernetes.io/projected/def1cb05-75eb-47cd-8733-e75e6c64ee66-kube-api-access-tbm8v\") pod \"kubernetes-dashboard-855c9754f9-f7ckg\" (UID: \"def1cb05-75eb-47cd-8733-e75e6c64ee66\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-f7ckg"
	Oct 09 20:20:08 embed-certs-565110 kubelet[777]: I1009 20:20:08.687047     777 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/e156a081-45c8-46ba-b291-eb3db7e6a867-tmp-volume\") pod \"dashboard-metrics-scraper-6ffb444bf9-wvnph\" (UID: \"e156a081-45c8-46ba-b291-eb3db7e6a867\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-wvnph"
	Oct 09 20:20:08 embed-certs-565110 kubelet[777]: W1009 20:20:08.875887     777 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/5db0c011c6081f65675c1c7e0e0cead1ee603fc85ef523d794ffef197f368e85/crio-2f659c12feea9f2751421d4bf9f866b8a4d322a26a32a78119f47d7e06ea1eec WatchSource:0}: Error finding container 2f659c12feea9f2751421d4bf9f866b8a4d322a26a32a78119f47d7e06ea1eec: Status 404 returned error can't find the container with id 2f659c12feea9f2751421d4bf9f866b8a4d322a26a32a78119f47d7e06ea1eec
	Oct 09 20:20:08 embed-certs-565110 kubelet[777]: I1009 20:20:08.950928     777 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Oct 09 20:20:16 embed-certs-565110 kubelet[777]: I1009 20:20:16.929571     777 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-f7ckg" podStartSLOduration=4.369183779 podStartE2EDuration="8.929550865s" podCreationTimestamp="2025-10-09 20:20:08 +0000 UTC" firstStartedPulling="2025-10-09 20:20:08.880957918 +0000 UTC m=+10.546764731" lastFinishedPulling="2025-10-09 20:20:13.441324996 +0000 UTC m=+15.107131817" observedRunningTime="2025-10-09 20:20:13.75551031 +0000 UTC m=+15.421317131" watchObservedRunningTime="2025-10-09 20:20:16.929550865 +0000 UTC m=+18.595357678"
	Oct 09 20:20:18 embed-certs-565110 kubelet[777]: I1009 20:20:18.758484     777 scope.go:117] "RemoveContainer" containerID="68fbac12b13e362fa52bbe7a7f1e90975d503cfe1178d1d230a132115e805b5f"
	Oct 09 20:20:19 embed-certs-565110 kubelet[777]: I1009 20:20:19.762345     777 scope.go:117] "RemoveContainer" containerID="68fbac12b13e362fa52bbe7a7f1e90975d503cfe1178d1d230a132115e805b5f"
	Oct 09 20:20:19 embed-certs-565110 kubelet[777]: I1009 20:20:19.762637     777 scope.go:117] "RemoveContainer" containerID="e8aee7e08506cd1a9ede8f256966cdb4f2af7592db4e2681c5a20e429af60fac"
	Oct 09 20:20:19 embed-certs-565110 kubelet[777]: E1009 20:20:19.762785     777 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-wvnph_kubernetes-dashboard(e156a081-45c8-46ba-b291-eb3db7e6a867)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-wvnph" podUID="e156a081-45c8-46ba-b291-eb3db7e6a867"
	Oct 09 20:20:20 embed-certs-565110 kubelet[777]: I1009 20:20:20.766348     777 scope.go:117] "RemoveContainer" containerID="e8aee7e08506cd1a9ede8f256966cdb4f2af7592db4e2681c5a20e429af60fac"
	Oct 09 20:20:20 embed-certs-565110 kubelet[777]: E1009 20:20:20.766979     777 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-wvnph_kubernetes-dashboard(e156a081-45c8-46ba-b291-eb3db7e6a867)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-wvnph" podUID="e156a081-45c8-46ba-b291-eb3db7e6a867"
	Oct 09 20:20:28 embed-certs-565110 kubelet[777]: I1009 20:20:28.895834     777 scope.go:117] "RemoveContainer" containerID="e8aee7e08506cd1a9ede8f256966cdb4f2af7592db4e2681c5a20e429af60fac"
	Oct 09 20:20:29 embed-certs-565110 kubelet[777]: I1009 20:20:29.790330     777 scope.go:117] "RemoveContainer" containerID="e8aee7e08506cd1a9ede8f256966cdb4f2af7592db4e2681c5a20e429af60fac"
	Oct 09 20:20:29 embed-certs-565110 kubelet[777]: I1009 20:20:29.790653     777 scope.go:117] "RemoveContainer" containerID="02a0a6e50a9df815b7a8e5622e75cf0fd7bb066c9f4c849fe03efa883d3c54e9"
	Oct 09 20:20:29 embed-certs-565110 kubelet[777]: E1009 20:20:29.790816     777 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-wvnph_kubernetes-dashboard(e156a081-45c8-46ba-b291-eb3db7e6a867)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-wvnph" podUID="e156a081-45c8-46ba-b291-eb3db7e6a867"
	Oct 09 20:20:35 embed-certs-565110 kubelet[777]: I1009 20:20:35.807384     777 scope.go:117] "RemoveContainer" containerID="3717764bae0d2e9c480c451663d8436220a28e339f7ea5f728f760e6db2361d2"
	Oct 09 20:20:38 embed-certs-565110 kubelet[777]: I1009 20:20:38.895085     777 scope.go:117] "RemoveContainer" containerID="02a0a6e50a9df815b7a8e5622e75cf0fd7bb066c9f4c849fe03efa883d3c54e9"
	Oct 09 20:20:38 embed-certs-565110 kubelet[777]: E1009 20:20:38.895260     777 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-wvnph_kubernetes-dashboard(e156a081-45c8-46ba-b291-eb3db7e6a867)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-wvnph" podUID="e156a081-45c8-46ba-b291-eb3db7e6a867"
	Oct 09 20:20:52 embed-certs-565110 kubelet[777]: I1009 20:20:52.560135     777 scope.go:117] "RemoveContainer" containerID="02a0a6e50a9df815b7a8e5622e75cf0fd7bb066c9f4c849fe03efa883d3c54e9"
	Oct 09 20:20:52 embed-certs-565110 kubelet[777]: I1009 20:20:52.854258     777 scope.go:117] "RemoveContainer" containerID="02a0a6e50a9df815b7a8e5622e75cf0fd7bb066c9f4c849fe03efa883d3c54e9"
	Oct 09 20:20:52 embed-certs-565110 kubelet[777]: I1009 20:20:52.854925     777 scope.go:117] "RemoveContainer" containerID="5a38898751c3190370fb093a21e09fb35270396f2335f7cf298f3ffeb676eab4"
	Oct 09 20:20:52 embed-certs-565110 kubelet[777]: E1009 20:20:52.855472     777 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-wvnph_kubernetes-dashboard(e156a081-45c8-46ba-b291-eb3db7e6a867)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-wvnph" podUID="e156a081-45c8-46ba-b291-eb3db7e6a867"
	Oct 09 20:20:53 embed-certs-565110 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 09 20:20:53 embed-certs-565110 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 09 20:20:53 embed-certs-565110 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	
	
	==> kubernetes-dashboard [dcee20808a0ab7b88a286f7a9fa5402833491c3468c0d98c6e6e41a4d387aeca] <==
	2025/10/09 20:20:13 Starting overwatch
	2025/10/09 20:20:13 Using namespace: kubernetes-dashboard
	2025/10/09 20:20:13 Using in-cluster config to connect to apiserver
	2025/10/09 20:20:13 Using secret token for csrf signing
	2025/10/09 20:20:13 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/10/09 20:20:13 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/10/09 20:20:13 Successful initial request to the apiserver, version: v1.34.1
	2025/10/09 20:20:13 Generating JWE encryption key
	2025/10/09 20:20:13 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/10/09 20:20:13 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/10/09 20:20:14 Initializing JWE encryption key from synchronized object
	2025/10/09 20:20:14 Creating in-cluster Sidecar client
	2025/10/09 20:20:14 Serving insecurely on HTTP port: 9090
	2025/10/09 20:20:14 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/09 20:20:44 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	
	
	==> storage-provisioner [0f270941e80ca04066ea9be417daf5c6ce5c2ec0888d5bfff2efb8528aeb3c92] <==
	I1009 20:20:35.871509       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1009 20:20:35.883797       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1009 20:20:35.884015       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1009 20:20:35.886498       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1009 20:20:39.341394       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1009 20:20:43.601678       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1009 20:20:47.200520       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1009 20:20:50.254789       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1009 20:20:53.277382       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1009 20:20:53.283661       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1009 20:20:53.283805       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1009 20:20:53.283969       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-565110_e43d944c-3ef4-4e52-90da-927b254e84de!
	I1009 20:20:53.284891       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"363137e6-edc1-40e3-81f2-14e316bf471f", APIVersion:"v1", ResourceVersion:"656", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-565110_e43d944c-3ef4-4e52-90da-927b254e84de became leader
	W1009 20:20:53.298056       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1009 20:20:53.301962       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1009 20:20:53.384144       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-565110_e43d944c-3ef4-4e52-90da-927b254e84de!
	W1009 20:20:55.304434       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1009 20:20:55.308876       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1009 20:20:57.312677       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1009 20:20:57.319877       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [3717764bae0d2e9c480c451663d8436220a28e339f7ea5f728f760e6db2361d2] <==
	I1009 20:20:05.525842       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1009 20:20:35.669788       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
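In the logs above, coredns, kindnet, and the first storage-provisioner container all fail against 10.96.0.1:443 with "i/o timeout" for roughly the first 30 seconds after the restart, which matches the 34s of extra waiting reported before "Done!". A minimal sketch, not run by the test, of how that Service VIP could be re-checked by hand, assuming the profile and context names shown in these logs:

	out/minikube-linux-arm64 -p embed-certs-565110 ssh -- \
	  'sudo iptables -t nat -L KUBE-SERVICES -n | grep 10.96.0.1'   # the kubernetes Service clusterIP should appear once kube-proxy has synced its rules
	kubectl --context embed-certs-565110 get svc kubernetes -n default -o wide   # confirms 10.96.0.1 is the apiserver Service VIP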
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-565110 -n embed-certs-565110
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-565110 -n embed-certs-565110: exit status 2 (493.215957ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context embed-certs-565110 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/embed-certs/serial/Pause (6.17s)
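The Pause subtest above exercises minikube pause and unpause against the embed-certs-565110 profile before the post-mortem is collected. A hedged sketch of repeating that step by hand with the same binary (standard minikube flags, not commands copied from this report):

	out/minikube-linux-arm64 pause -p embed-certs-565110 --alsologtostderr -v=1
	out/minikube-linux-arm64 unpause -p embed-certs-565110 --alsologtostderr -v=1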

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (2.74s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-160257 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-160257 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (302.15088ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-09T20:21:43Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-160257 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
start_stop_delete_test.go:209: WARNING: cni mode requires additional setup before pods can schedule :(
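Per the stderr above, the addon enable aborts in its "check paused" pre-flight, which shells out to "sudo runc list -f json" on the node; that command fails outright because /run/runc does not exist, so no pause state can be read at all. A small sketch, not part of the test run, of how the node could be inspected by hand, assuming the profile name above and that crictl is present in the node image:

	out/minikube-linux-arm64 -p newest-cni-160257 ssh -- 'sudo ls /run/runc; sudo ls /run/crun'   # runc state dir vs. crun state dir (crun, if it is the configured OCI runtime, keeps state under /run/crun)
	out/minikube-linux-arm64 -p newest-cni-160257 ssh -- 'sudo crictl ps -a'                      # list containers through the CRI rather than calling runc directly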
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect newest-cni-160257
helpers_test.go:243: (dbg) docker inspect newest-cni-160257:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "b09c68fc79eaa0587a228b9cc096b0eae173cd347de717a0ae93a73ef6ea01b7",
	        "Created": "2025-10-09T20:21:08.350011602Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 501999,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-09T20:21:08.420229645Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:0e619a02aa1f399c748a05785ca381533d7a01df724b62f3a9ddd9e288db8f6f",
	        "ResolvConfPath": "/var/lib/docker/containers/b09c68fc79eaa0587a228b9cc096b0eae173cd347de717a0ae93a73ef6ea01b7/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/b09c68fc79eaa0587a228b9cc096b0eae173cd347de717a0ae93a73ef6ea01b7/hostname",
	        "HostsPath": "/var/lib/docker/containers/b09c68fc79eaa0587a228b9cc096b0eae173cd347de717a0ae93a73ef6ea01b7/hosts",
	        "LogPath": "/var/lib/docker/containers/b09c68fc79eaa0587a228b9cc096b0eae173cd347de717a0ae93a73ef6ea01b7/b09c68fc79eaa0587a228b9cc096b0eae173cd347de717a0ae93a73ef6ea01b7-json.log",
	        "Name": "/newest-cni-160257",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "newest-cni-160257:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "newest-cni-160257",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "b09c68fc79eaa0587a228b9cc096b0eae173cd347de717a0ae93a73ef6ea01b7",
	                "LowerDir": "/var/lib/docker/overlay2/c3f22559d24a79b75dffe8207445a15a01a15487326878a474489ec60730e13e-init/diff:/var/lib/docker/overlay2/810a91395ed9b7ed2c0bbbdee8600efcf64f88722cbabc47d471235a9f901ed9/diff",
	                "MergedDir": "/var/lib/docker/overlay2/c3f22559d24a79b75dffe8207445a15a01a15487326878a474489ec60730e13e/merged",
	                "UpperDir": "/var/lib/docker/overlay2/c3f22559d24a79b75dffe8207445a15a01a15487326878a474489ec60730e13e/diff",
	                "WorkDir": "/var/lib/docker/overlay2/c3f22559d24a79b75dffe8207445a15a01a15487326878a474489ec60730e13e/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "newest-cni-160257",
	                "Source": "/var/lib/docker/volumes/newest-cni-160257/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-160257",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-160257",
	                "name.minikube.sigs.k8s.io": "newest-cni-160257",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "66f85afac0863a5fb45167c6ab05644695753b9aac69fd444f60d50e20faaba2",
	            "SandboxKey": "/var/run/docker/netns/66f85afac086",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33456"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33457"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33460"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33458"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33459"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "newest-cni-160257": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "3a:d0:fd:05:37:ff",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "870f78d9db06c84c4e340afedcf88d286d22b52c6864f8eefaae6f4f49447e55",
	                    "EndpointID": "02eb175e7bfb61974a01b0f2f024f890e63f6d0dce7beef4067b943cf6b3a808",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "newest-cni-160257",
	                        "b09c68fc79ea"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-160257 -n newest-cni-160257
helpers_test.go:252: <<< TestStartStop/group/newest-cni/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-160257 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p newest-cni-160257 logs -n 25: (1.153586231s)
helpers_test.go:260: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬──────────────
───────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼──────────────
───────┤
	│ addons  │ enable dashboard -p no-preload-020313 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-020313            │ jenkins │ v1.37.0 │ 09 Oct 25 20:17 UTC │ 09 Oct 25 20:17 UTC │
	│ start   │ -p no-preload-020313 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-020313            │ jenkins │ v1.37.0 │ 09 Oct 25 20:17 UTC │ 09 Oct 25 20:18 UTC │
	│ delete  │ -p old-k8s-version-670649                                                                                                                                                                                                                     │ old-k8s-version-670649       │ jenkins │ v1.37.0 │ 09 Oct 25 20:17 UTC │ 09 Oct 25 20:17 UTC │
	│ delete  │ -p old-k8s-version-670649                                                                                                                                                                                                                     │ old-k8s-version-670649       │ jenkins │ v1.37.0 │ 09 Oct 25 20:18 UTC │ 09 Oct 25 20:18 UTC │
	│ start   │ -p embed-certs-565110 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-565110           │ jenkins │ v1.37.0 │ 09 Oct 25 20:18 UTC │ 09 Oct 25 20:19 UTC │
	│ image   │ no-preload-020313 image list --format=json                                                                                                                                                                                                    │ no-preload-020313            │ jenkins │ v1.37.0 │ 09 Oct 25 20:18 UTC │ 09 Oct 25 20:18 UTC │
	│ pause   │ -p no-preload-020313 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-020313            │ jenkins │ v1.37.0 │ 09 Oct 25 20:18 UTC │                     │
	│ delete  │ -p no-preload-020313                                                                                                                                                                                                                          │ no-preload-020313            │ jenkins │ v1.37.0 │ 09 Oct 25 20:19 UTC │ 09 Oct 25 20:19 UTC │
	│ delete  │ -p no-preload-020313                                                                                                                                                                                                                          │ no-preload-020313            │ jenkins │ v1.37.0 │ 09 Oct 25 20:19 UTC │ 09 Oct 25 20:19 UTC │
	│ delete  │ -p disable-driver-mounts-613966                                                                                                                                                                                                               │ disable-driver-mounts-613966 │ jenkins │ v1.37.0 │ 09 Oct 25 20:19 UTC │ 09 Oct 25 20:19 UTC │
	│ start   │ -p default-k8s-diff-port-417984 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-417984 │ jenkins │ v1.37.0 │ 09 Oct 25 20:19 UTC │ 09 Oct 25 20:20 UTC │
	│ addons  │ enable metrics-server -p embed-certs-565110 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-565110           │ jenkins │ v1.37.0 │ 09 Oct 25 20:19 UTC │                     │
	│ stop    │ -p embed-certs-565110 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-565110           │ jenkins │ v1.37.0 │ 09 Oct 25 20:19 UTC │ 09 Oct 25 20:19 UTC │
	│ addons  │ enable dashboard -p embed-certs-565110 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-565110           │ jenkins │ v1.37.0 │ 09 Oct 25 20:19 UTC │ 09 Oct 25 20:19 UTC │
	│ start   │ -p embed-certs-565110 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-565110           │ jenkins │ v1.37.0 │ 09 Oct 25 20:19 UTC │ 09 Oct 25 20:20 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-417984 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-417984 │ jenkins │ v1.37.0 │ 09 Oct 25 20:20 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-417984 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-417984 │ jenkins │ v1.37.0 │ 09 Oct 25 20:20 UTC │ 09 Oct 25 20:20 UTC │
	│ image   │ embed-certs-565110 image list --format=json                                                                                                                                                                                                   │ embed-certs-565110           │ jenkins │ v1.37.0 │ 09 Oct 25 20:20 UTC │ 09 Oct 25 20:20 UTC │
	│ pause   │ -p embed-certs-565110 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-565110           │ jenkins │ v1.37.0 │ 09 Oct 25 20:20 UTC │                     │
	│ addons  │ enable dashboard -p default-k8s-diff-port-417984 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-417984 │ jenkins │ v1.37.0 │ 09 Oct 25 20:20 UTC │ 09 Oct 25 20:20 UTC │
	│ delete  │ -p embed-certs-565110                                                                                                                                                                                                                         │ embed-certs-565110           │ jenkins │ v1.37.0 │ 09 Oct 25 20:20 UTC │ 09 Oct 25 20:21 UTC │
	│ start   │ -p default-k8s-diff-port-417984 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-417984 │ jenkins │ v1.37.0 │ 09 Oct 25 20:20 UTC │                     │
	│ delete  │ -p embed-certs-565110                                                                                                                                                                                                                         │ embed-certs-565110           │ jenkins │ v1.37.0 │ 09 Oct 25 20:21 UTC │ 09 Oct 25 20:21 UTC │
	│ start   │ -p newest-cni-160257 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-160257            │ jenkins │ v1.37.0 │ 09 Oct 25 20:21 UTC │ 09 Oct 25 20:21 UTC │
	│ addons  │ enable metrics-server -p newest-cni-160257 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-160257            │ jenkins │ v1.37.0 │ 09 Oct 25 20:21 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴──────────────
───────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/09 20:21:02
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1009 20:21:02.407602  501106 out.go:360] Setting OutFile to fd 1 ...
	I1009 20:21:02.407727  501106 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1009 20:21:02.407736  501106 out.go:374] Setting ErrFile to fd 2...
	I1009 20:21:02.407743  501106 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1009 20:21:02.408000  501106 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21683-294150/.minikube/bin
	I1009 20:21:02.408432  501106 out.go:368] Setting JSON to false
	I1009 20:21:02.409367  501106 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":11002,"bootTime":1760030261,"procs":167,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1009 20:21:02.409446  501106 start.go:143] virtualization:  
	I1009 20:21:02.412953  501106 out.go:179] * [newest-cni-160257] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1009 20:21:02.416044  501106 out.go:179]   - MINIKUBE_LOCATION=21683
	I1009 20:21:02.416109  501106 notify.go:221] Checking for updates...
	I1009 20:21:02.422588  501106 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1009 20:21:02.425584  501106 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21683-294150/kubeconfig
	I1009 20:21:02.428688  501106 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21683-294150/.minikube
	I1009 20:21:02.431607  501106 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1009 20:21:02.434602  501106 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1009 20:21:02.438068  501106 config.go:182] Loaded profile config "default-k8s-diff-port-417984": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1009 20:21:02.438244  501106 driver.go:422] Setting default libvirt URI to qemu:///system
	I1009 20:21:02.469235  501106 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1009 20:21:02.469376  501106 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1009 20:21:02.536376  501106 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:53 SystemTime:2025-10-09 20:21:02.526026302 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214827008 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1009 20:21:02.536516  501106 docker.go:319] overlay module found
	I1009 20:21:02.539915  501106 out.go:179] * Using the docker driver based on user configuration
	I1009 20:21:02.542904  501106 start.go:309] selected driver: docker
	I1009 20:21:02.542939  501106 start.go:930] validating driver "docker" against <nil>
	I1009 20:21:02.542956  501106 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1009 20:21:02.543749  501106 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1009 20:21:02.599656  501106 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:53 SystemTime:2025-10-09 20:21:02.589808974 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214827008 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1009 20:21:02.599822  501106 start_flags.go:328] no existing cluster config was found, will generate one from the flags 
	W1009 20:21:02.599849  501106 out.go:285] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I1009 20:21:02.600093  501106 start_flags.go:1012] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1009 20:21:02.603165  501106 out.go:179] * Using Docker driver with root privileges
	I1009 20:21:02.606177  501106 cni.go:84] Creating CNI manager for ""
	I1009 20:21:02.606263  501106 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1009 20:21:02.606276  501106 start_flags.go:337] Found "CNI" CNI - setting NetworkPlugin=cni
	I1009 20:21:02.606354  501106 start.go:353] cluster config:
	{Name:newest-cni-160257 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-160257 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Containe
rRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnet
ClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1009 20:21:02.609633  501106 out.go:179] * Starting "newest-cni-160257" primary control-plane node in "newest-cni-160257" cluster
	I1009 20:21:02.612659  501106 cache.go:123] Beginning downloading kic base image for docker with crio
	I1009 20:21:02.615652  501106 out.go:179] * Pulling base image v0.0.48-1759745255-21703 ...
	I1009 20:21:02.618774  501106 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1009 20:21:02.618844  501106 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21683-294150/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1009 20:21:02.618860  501106 cache.go:58] Caching tarball of preloaded images
	I1009 20:21:02.618880  501106 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon
	I1009 20:21:02.618984  501106 preload.go:233] Found /home/jenkins/minikube-integration/21683-294150/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1009 20:21:02.618998  501106 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1009 20:21:02.619124  501106 profile.go:143] Saving config to /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/newest-cni-160257/config.json ...
	I1009 20:21:02.619145  501106 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/newest-cni-160257/config.json: {Name:mkc64b5704726ff4a1e7af87ac8e3310149dceea Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 20:21:02.640455  501106 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon, skipping pull
	I1009 20:21:02.640475  501106 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 exists in daemon, skipping load
	I1009 20:21:02.640489  501106 cache.go:232] Successfully downloaded all kic artifacts
	I1009 20:21:02.640512  501106 start.go:361] acquireMachinesLock for newest-cni-160257: {Name:mkab4aa92a505aec53d4bce517e62dd4f38ff19e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1009 20:21:02.640614  501106 start.go:365] duration metric: took 86.631µs to acquireMachinesLock for "newest-cni-160257"
	I1009 20:21:02.640639  501106 start.go:94] Provisioning new machine with config: &{Name:newest-cni-160257 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-160257 Namespace:default APIServerHAVIP: APIServer
Name:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:fals
e DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1009 20:21:02.640721  501106 start.go:126] createHost starting for "" (driver="docker")
	I1009 20:20:59.963058  500265 out.go:252] * Restarting existing docker container for "default-k8s-diff-port-417984" ...
	I1009 20:20:59.963157  500265 cli_runner.go:164] Run: docker start default-k8s-diff-port-417984
	I1009 20:21:00.544617  500265 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-417984 --format={{.State.Status}}
	I1009 20:21:00.570324  500265 kic.go:430] container "default-k8s-diff-port-417984" state is running.
	I1009 20:21:00.571010  500265 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-417984
	I1009 20:21:00.598928  500265 profile.go:143] Saving config to /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/default-k8s-diff-port-417984/config.json ...
	I1009 20:21:00.599301  500265 machine.go:93] provisionDockerMachine start ...
	I1009 20:21:00.599372  500265 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-417984
	I1009 20:21:00.627634  500265 main.go:141] libmachine: Using SSH client type: native
	I1009 20:21:00.627982  500265 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33451 <nil> <nil>}
	I1009 20:21:00.627991  500265 main.go:141] libmachine: About to run SSH command:
	hostname
	I1009 20:21:00.628881  500265 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1009 20:21:03.789386  500265 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-417984
	
	I1009 20:21:03.789492  500265 ubuntu.go:182] provisioning hostname "default-k8s-diff-port-417984"
	I1009 20:21:03.789616  500265 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-417984
	I1009 20:21:03.814784  500265 main.go:141] libmachine: Using SSH client type: native
	I1009 20:21:03.815096  500265 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33451 <nil> <nil>}
	I1009 20:21:03.815109  500265 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-417984 && echo "default-k8s-diff-port-417984" | sudo tee /etc/hostname
	I1009 20:21:03.985724  500265 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-417984
	
	I1009 20:21:03.985822  500265 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-417984
	I1009 20:21:04.011480  500265 main.go:141] libmachine: Using SSH client type: native
	I1009 20:21:04.011803  500265 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33451 <nil> <nil>}
	I1009 20:21:04.011822  500265 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-417984' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-417984/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-417984' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1009 20:21:04.173657  500265 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1009 20:21:04.173685  500265 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21683-294150/.minikube CaCertPath:/home/jenkins/minikube-integration/21683-294150/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21683-294150/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21683-294150/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21683-294150/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21683-294150/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21683-294150/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21683-294150/.minikube}
	I1009 20:21:04.173706  500265 ubuntu.go:190] setting up certificates
	I1009 20:21:04.173715  500265 provision.go:84] configureAuth start
	I1009 20:21:04.173791  500265 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-417984
	I1009 20:21:04.196958  500265 provision.go:143] copyHostCerts
	I1009 20:21:04.197020  500265 exec_runner.go:144] found /home/jenkins/minikube-integration/21683-294150/.minikube/ca.pem, removing ...
	I1009 20:21:04.197039  500265 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21683-294150/.minikube/ca.pem
	I1009 20:21:04.197122  500265 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-294150/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21683-294150/.minikube/ca.pem (1078 bytes)
	I1009 20:21:04.197307  500265 exec_runner.go:144] found /home/jenkins/minikube-integration/21683-294150/.minikube/cert.pem, removing ...
	I1009 20:21:04.197316  500265 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21683-294150/.minikube/cert.pem
	I1009 20:21:04.197349  500265 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-294150/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21683-294150/.minikube/cert.pem (1123 bytes)
	I1009 20:21:04.197412  500265 exec_runner.go:144] found /home/jenkins/minikube-integration/21683-294150/.minikube/key.pem, removing ...
	I1009 20:21:04.197417  500265 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21683-294150/.minikube/key.pem
	I1009 20:21:04.197442  500265 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-294150/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21683-294150/.minikube/key.pem (1679 bytes)
	I1009 20:21:04.197507  500265 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21683-294150/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21683-294150/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21683-294150/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-417984 san=[127.0.0.1 192.168.85.2 default-k8s-diff-port-417984 localhost minikube]
	I1009 20:21:02.644263  501106 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1009 20:21:02.644497  501106 start.go:160] libmachine.API.Create for "newest-cni-160257" (driver="docker")
	I1009 20:21:02.644546  501106 client.go:168] LocalClient.Create starting
	I1009 20:21:02.644621  501106 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21683-294150/.minikube/certs/ca.pem
	I1009 20:21:02.644659  501106 main.go:141] libmachine: Decoding PEM data...
	I1009 20:21:02.644672  501106 main.go:141] libmachine: Parsing certificate...
	I1009 20:21:02.644723  501106 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21683-294150/.minikube/certs/cert.pem
	I1009 20:21:02.644745  501106 main.go:141] libmachine: Decoding PEM data...
	I1009 20:21:02.644755  501106 main.go:141] libmachine: Parsing certificate...
	I1009 20:21:02.645161  501106 cli_runner.go:164] Run: docker network inspect newest-cni-160257 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1009 20:21:02.661962  501106 cli_runner.go:211] docker network inspect newest-cni-160257 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1009 20:21:02.662048  501106 network_create.go:284] running [docker network inspect newest-cni-160257] to gather additional debugging logs...
	I1009 20:21:02.662075  501106 cli_runner.go:164] Run: docker network inspect newest-cni-160257
	W1009 20:21:02.677960  501106 cli_runner.go:211] docker network inspect newest-cni-160257 returned with exit code 1
	I1009 20:21:02.678002  501106 network_create.go:287] error running [docker network inspect newest-cni-160257]: docker network inspect newest-cni-160257: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network newest-cni-160257 not found
	I1009 20:21:02.678018  501106 network_create.go:289] output of [docker network inspect newest-cni-160257]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network newest-cni-160257 not found
	
	** /stderr **
	I1009 20:21:02.678146  501106 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1009 20:21:02.713194  501106 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-3847a6577684 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:6e:b5:e6:7d:c7:ad} reservation:<nil>}
	I1009 20:21:02.713584  501106 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-5742e12e0dad IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:9e:82:91:fd:a6:fb} reservation:<nil>}
	I1009 20:21:02.713865  501106 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-11b099636187 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:0e:bb:e5:1b:6d:a2} reservation:<nil>}
	I1009 20:21:02.714283  501106 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001a42cc0}
	I1009 20:21:02.714307  501106 network_create.go:124] attempt to create docker network newest-cni-160257 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1009 20:21:02.714368  501106 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=newest-cni-160257 newest-cni-160257
	I1009 20:21:02.774189  501106 network_create.go:108] docker network newest-cni-160257 192.168.76.0/24 created
	I1009 20:21:02.774223  501106 kic.go:121] calculated static IP "192.168.76.2" for the "newest-cni-160257" container
	I1009 20:21:02.774324  501106 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1009 20:21:02.790982  501106 cli_runner.go:164] Run: docker volume create newest-cni-160257 --label name.minikube.sigs.k8s.io=newest-cni-160257 --label created_by.minikube.sigs.k8s.io=true
	I1009 20:21:02.810248  501106 oci.go:103] Successfully created a docker volume newest-cni-160257
	I1009 20:21:02.810337  501106 cli_runner.go:164] Run: docker run --rm --name newest-cni-160257-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-160257 --entrypoint /usr/bin/test -v newest-cni-160257:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 -d /var/lib
	I1009 20:21:03.323270  501106 oci.go:107] Successfully prepared a docker volume newest-cni-160257
	I1009 20:21:03.323328  501106 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1009 20:21:03.323348  501106 kic.go:194] Starting extracting preloaded images to volume ...
	I1009 20:21:03.323419  501106 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21683-294150/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v newest-cni-160257:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 -I lz4 -xf /preloaded.tar -C /extractDir
	I1009 20:21:05.522904  500265 provision.go:177] copyRemoteCerts
	I1009 20:21:05.523034  500265 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1009 20:21:05.523123  500265 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-417984
	I1009 20:21:05.541517  500265 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33451 SSHKeyPath:/home/jenkins/minikube-integration/21683-294150/.minikube/machines/default-k8s-diff-port-417984/id_rsa Username:docker}
	I1009 20:21:05.666130  500265 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-294150/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1009 20:21:05.689025  500265 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-294150/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1009 20:21:05.709375  500265 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-294150/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1009 20:21:05.730526  500265 provision.go:87] duration metric: took 1.556786284s to configureAuth
	I1009 20:21:05.730601  500265 ubuntu.go:206] setting minikube options for container-runtime
	I1009 20:21:05.730840  500265 config.go:182] Loaded profile config "default-k8s-diff-port-417984": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1009 20:21:05.730995  500265 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-417984
	I1009 20:21:05.753608  500265 main.go:141] libmachine: Using SSH client type: native
	I1009 20:21:05.753915  500265 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33451 <nil> <nil>}
	I1009 20:21:05.753931  500265 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1009 20:21:06.183276  500265 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1009 20:21:06.183298  500265 machine.go:96] duration metric: took 5.583985941s to provisionDockerMachine
	I1009 20:21:06.183309  500265 start.go:294] postStartSetup for "default-k8s-diff-port-417984" (driver="docker")
	I1009 20:21:06.183320  500265 start.go:323] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1009 20:21:06.183381  500265 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1009 20:21:06.183423  500265 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-417984
	I1009 20:21:06.218822  500265 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33451 SSHKeyPath:/home/jenkins/minikube-integration/21683-294150/.minikube/machines/default-k8s-diff-port-417984/id_rsa Username:docker}
	I1009 20:21:06.339123  500265 ssh_runner.go:195] Run: cat /etc/os-release
	I1009 20:21:06.346527  500265 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1009 20:21:06.346600  500265 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1009 20:21:06.346615  500265 filesync.go:126] Scanning /home/jenkins/minikube-integration/21683-294150/.minikube/addons for local assets ...
	I1009 20:21:06.346675  500265 filesync.go:126] Scanning /home/jenkins/minikube-integration/21683-294150/.minikube/files for local assets ...
	I1009 20:21:06.346761  500265 filesync.go:149] local asset: /home/jenkins/minikube-integration/21683-294150/.minikube/files/etc/ssl/certs/2960022.pem -> 2960022.pem in /etc/ssl/certs
	I1009 20:21:06.346871  500265 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1009 20:21:06.355637  500265 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-294150/.minikube/files/etc/ssl/certs/2960022.pem --> /etc/ssl/certs/2960022.pem (1708 bytes)
	I1009 20:21:06.382204  500265 start.go:297] duration metric: took 198.878899ms for postStartSetup
	I1009 20:21:06.382287  500265 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1009 20:21:06.382333  500265 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-417984
	I1009 20:21:06.408939  500265 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33451 SSHKeyPath:/home/jenkins/minikube-integration/21683-294150/.minikube/machines/default-k8s-diff-port-417984/id_rsa Username:docker}
	I1009 20:21:06.542884  500265 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1009 20:21:06.548053  500265 fix.go:57] duration metric: took 6.615409923s for fixHost
	I1009 20:21:06.548119  500265 start.go:84] releasing machines lock for "default-k8s-diff-port-417984", held for 6.615503184s
	I1009 20:21:06.548224  500265 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-417984
	I1009 20:21:06.569974  500265 ssh_runner.go:195] Run: cat /version.json
	I1009 20:21:06.569999  500265 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1009 20:21:06.570031  500265 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-417984
	I1009 20:21:06.570058  500265 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-417984
	I1009 20:21:06.591152  500265 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33451 SSHKeyPath:/home/jenkins/minikube-integration/21683-294150/.minikube/machines/default-k8s-diff-port-417984/id_rsa Username:docker}
	I1009 20:21:06.603801  500265 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33451 SSHKeyPath:/home/jenkins/minikube-integration/21683-294150/.minikube/machines/default-k8s-diff-port-417984/id_rsa Username:docker}
	I1009 20:21:06.829681  500265 ssh_runner.go:195] Run: systemctl --version
	I1009 20:21:06.837040  500265 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1009 20:21:06.879833  500265 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1009 20:21:06.884483  500265 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1009 20:21:06.884558  500265 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1009 20:21:06.894360  500265 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1009 20:21:06.894383  500265 start.go:496] detecting cgroup driver to use...
	I1009 20:21:06.894417  500265 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1009 20:21:06.894465  500265 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1009 20:21:06.910323  500265 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1009 20:21:06.924751  500265 docker.go:218] disabling cri-docker service (if available) ...
	I1009 20:21:06.924820  500265 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1009 20:21:06.941943  500265 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1009 20:21:06.956498  500265 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1009 20:21:07.092302  500265 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1009 20:21:07.222630  500265 docker.go:234] disabling docker service ...
	I1009 20:21:07.222711  500265 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1009 20:21:07.238934  500265 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1009 20:21:07.253060  500265 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1009 20:21:07.380772  500265 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1009 20:21:07.507632  500265 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1009 20:21:07.520757  500265 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1009 20:21:07.537063  500265 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1009 20:21:07.537161  500265 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 20:21:07.550580  500265 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1009 20:21:07.550653  500265 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 20:21:07.561024  500265 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 20:21:07.572604  500265 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 20:21:07.583503  500265 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1009 20:21:07.592549  500265 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 20:21:07.602625  500265 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 20:21:07.611806  500265 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 20:21:07.621040  500265 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1009 20:21:07.639075  500265 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1009 20:21:07.647377  500265 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 20:21:07.773875  500265 ssh_runner.go:195] Run: sudo systemctl restart crio
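Note: after the series of sed edits above, the drop-in /etc/crio/crio.conf.d/02-crio.conf should end up with roughly the following settings (a reconstruction from the commands, not a dump of the actual file; the section headers follow CRI-O's usual TOML layout and are an assumption):

	[crio.image]
	pause_image = "registry.k8s.io/pause:3.10.1"

	[crio.runtime]
	cgroup_manager = "cgroupfs"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]

The daemon-reload plus systemctl restart crio then makes the new pause image, cgroup driver and sysctl defaults effective.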
	I1009 20:21:08.360585  500265 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1009 20:21:08.360707  500265 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1009 20:21:08.370911  500265 start.go:564] Will wait 60s for crictl version
	I1009 20:21:08.371007  500265 ssh_runner.go:195] Run: which crictl
	I1009 20:21:08.375592  500265 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1009 20:21:08.418826  500265 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
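Note: crictl picks up the runtime endpoint from the /etc/crictl.yaml written a few lines earlier, which is why this and later invocations (e.g. sudo crictl images --output json) need no endpoint flag. The equivalent explicit call would be, for example:

	sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock version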
	I1009 20:21:08.418951  500265 ssh_runner.go:195] Run: crio --version
	I1009 20:21:08.465944  500265 ssh_runner.go:195] Run: crio --version
	I1009 20:21:08.523080  500265 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1009 20:21:08.526198  500265 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-417984 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1009 20:21:08.561078  500265 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1009 20:21:08.565296  500265 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
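Note: both /etc/hosts updates in this run use the same idempotent pattern: grep -v drops any existing line for the name, the new mapping is echoed onto the end, and the temp file is copied back over /etc/hosts with sudo. The sequence is repeated below for control-plane.minikube.internal, after which the node's /etc/hosts should contain lines like (values from this run):

	192.168.85.1	host.minikube.internal
	192.168.85.2	control-plane.minikube.internal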
	I1009 20:21:08.575995  500265 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-417984 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-417984 Namespace:default APIServerHAVIP: APISer
verName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountT
ype:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1009 20:21:08.576119  500265 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1009 20:21:08.576179  500265 ssh_runner.go:195] Run: sudo crictl images --output json
	I1009 20:21:08.621724  500265 crio.go:514] all images are preloaded for cri-o runtime.
	I1009 20:21:08.621749  500265 crio.go:433] Images already preloaded, skipping extraction
	I1009 20:21:08.621807  500265 ssh_runner.go:195] Run: sudo crictl images --output json
	I1009 20:21:08.657889  500265 crio.go:514] all images are preloaded for cri-o runtime.
	I1009 20:21:08.657912  500265 cache_images.go:85] Images are preloaded, skipping loading
	I1009 20:21:08.657920  500265 kubeadm.go:934] updating node { 192.168.85.2 8444 v1.34.1 crio true true} ...
	I1009 20:21:08.658022  500265 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=default-k8s-diff-port-417984 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-417984 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
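Note: the [Service] override rendered here appears to be what gets copied to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf a few lines below (the 378-byte scp); it clears ExecStart and relaunches the kubelet with the CRI-O-specific flags and the node IP. On the node, the effective unit could be checked with, for instance:

	systemctl cat kubelet
	systemctl show kubelet -p ExecStart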
	I1009 20:21:08.658102  500265 ssh_runner.go:195] Run: crio config
	I1009 20:21:08.775595  500265 cni.go:84] Creating CNI manager for ""
	I1009 20:21:08.775622  500265 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1009 20:21:08.775640  500265 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1009 20:21:08.775665  500265 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8444 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-417984 NodeName:default-k8s-diff-port-417984 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.
crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1009 20:21:08.775797  500265 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-417984"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1009 20:21:08.775869  500265 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1009 20:21:08.795321  500265 binaries.go:44] Found k8s binaries, skipping transfer
	I1009 20:21:08.795500  500265 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1009 20:21:08.810721  500265 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I1009 20:21:08.832519  500265 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1009 20:21:08.858643  500265 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2225 bytes)
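Note: the manifest just staged as /var/tmp/minikube/kubeadm.yaml.new bundles the InitConfiguration, ClusterConfiguration, KubeletConfiguration and KubeProxyConfiguration shown above in one file; minikube later diffs it against the existing kubeadm.yaml to decide whether the control plane needs reconfiguring (see the diff at 20:21:11 below). As a hedged manual sanity check, recent kubeadm releases can validate such a file directly:

	sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new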
	I1009 20:21:08.884389  500265 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1009 20:21:08.891388  500265 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1009 20:21:08.911942  500265 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 20:21:09.177067  500265 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1009 20:21:09.221665  500265 certs.go:69] Setting up /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/default-k8s-diff-port-417984 for IP: 192.168.85.2
	I1009 20:21:09.221683  500265 certs.go:195] generating shared ca certs ...
	I1009 20:21:09.221699  500265 certs.go:227] acquiring lock for ca certs: {Name:mk21fccf7de12baa51e226eec44546b42b579a70 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 20:21:09.221844  500265 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21683-294150/.minikube/ca.key
	I1009 20:21:09.221885  500265 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21683-294150/.minikube/proxy-client-ca.key
	I1009 20:21:09.221892  500265 certs.go:257] generating profile certs ...
	I1009 20:21:09.221972  500265 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/default-k8s-diff-port-417984/client.key
	I1009 20:21:09.222035  500265 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/default-k8s-diff-port-417984/apiserver.key.0bef80d8
	I1009 20:21:09.222071  500265 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/default-k8s-diff-port-417984/proxy-client.key
	I1009 20:21:09.222174  500265 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-294150/.minikube/certs/296002.pem (1338 bytes)
	W1009 20:21:09.222204  500265 certs.go:480] ignoring /home/jenkins/minikube-integration/21683-294150/.minikube/certs/296002_empty.pem, impossibly tiny 0 bytes
	I1009 20:21:09.222212  500265 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-294150/.minikube/certs/ca-key.pem (1679 bytes)
	I1009 20:21:09.222235  500265 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-294150/.minikube/certs/ca.pem (1078 bytes)
	I1009 20:21:09.222258  500265 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-294150/.minikube/certs/cert.pem (1123 bytes)
	I1009 20:21:09.222281  500265 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-294150/.minikube/certs/key.pem (1679 bytes)
	I1009 20:21:09.222321  500265 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-294150/.minikube/files/etc/ssl/certs/2960022.pem (1708 bytes)
	I1009 20:21:09.222906  500265 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-294150/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1009 20:21:09.308247  500265 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-294150/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1009 20:21:09.353957  500265 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-294150/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1009 20:21:09.408081  500265 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-294150/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1009 20:21:09.510219  500265 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/default-k8s-diff-port-417984/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1009 20:21:09.655843  500265 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/default-k8s-diff-port-417984/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1009 20:21:09.731030  500265 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/default-k8s-diff-port-417984/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1009 20:21:09.761195  500265 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/default-k8s-diff-port-417984/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1009 20:21:09.794220  500265 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-294150/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1009 20:21:09.840455  500265 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-294150/.minikube/certs/296002.pem --> /usr/share/ca-certificates/296002.pem (1338 bytes)
	I1009 20:21:09.878321  500265 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-294150/.minikube/files/etc/ssl/certs/2960022.pem --> /usr/share/ca-certificates/2960022.pem (1708 bytes)
	I1009 20:21:09.977696  500265 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1009 20:21:10.029714  500265 ssh_runner.go:195] Run: openssl version
	I1009 20:21:10.047277  500265 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/296002.pem && ln -fs /usr/share/ca-certificates/296002.pem /etc/ssl/certs/296002.pem"
	I1009 20:21:10.081407  500265 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/296002.pem
	I1009 20:21:10.101204  500265 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  9 19:08 /usr/share/ca-certificates/296002.pem
	I1009 20:21:10.101281  500265 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/296002.pem
	I1009 20:21:10.268438  500265 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/296002.pem /etc/ssl/certs/51391683.0"
	I1009 20:21:10.286474  500265 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2960022.pem && ln -fs /usr/share/ca-certificates/2960022.pem /etc/ssl/certs/2960022.pem"
	I1009 20:21:10.309923  500265 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2960022.pem
	I1009 20:21:10.318182  500265 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  9 19:08 /usr/share/ca-certificates/2960022.pem
	I1009 20:21:10.318256  500265 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2960022.pem
	I1009 20:21:10.476670  500265 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2960022.pem /etc/ssl/certs/3ec20f2e.0"
	I1009 20:21:10.498435  500265 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1009 20:21:10.520493  500265 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1009 20:21:10.531177  500265 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  9 19:01 /usr/share/ca-certificates/minikubeCA.pem
	I1009 20:21:10.531264  500265 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1009 20:21:10.759733  500265 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
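Note: each CA copied under /usr/share/ca-certificates is symlinked into /etc/ssl/certs under its OpenSSL subject hash plus a ".0" suffix (the standard c_rehash naming), which is where the 51391683, 3ec20f2e and b5213941 values above come from. The minikubeCA link, for example, can be reproduced by hand as:

	h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${h}.0"   # h = b5213941 in this run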
	I1009 20:21:10.824180  500265 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1009 20:21:10.836799  500265 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1009 20:21:10.942001  500265 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1009 20:21:11.055537  500265 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1009 20:21:11.145884  500265 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1009 20:21:11.247607  500265 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1009 20:21:11.334014  500265 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
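Note: openssl x509 -checkend 86400 exits 0 only if the certificate is still valid 86400 seconds (24 hours) from now, so a non-zero exit on any of the checks above would presumably prompt minikube to regenerate that certificate rather than restart with one about to expire. For example:

	openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400 \
	  && echo "valid for >24h" || echo "expires within 24h"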
	I1009 20:21:11.426408  500265 kubeadm.go:400] StartCluster: {Name:default-k8s-diff-port-417984 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-417984 Namespace:default APIServerHAVIP: APIServer
Name:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType
:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1009 20:21:11.426496  500265 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1009 20:21:11.426570  500265 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1009 20:21:11.519380  500265 cri.go:89] found id: "4eeb90a44de65c7aa6b10b300aa161b1c37aa94a4e93eadfd6975cbb0428c677"
	I1009 20:21:11.519403  500265 cri.go:89] found id: "bef0f8b493af26a97c449506b2fb953144bf49745a3a417030e064059e7b187a"
	I1009 20:21:11.519409  500265 cri.go:89] found id: "c867b182d54580a31fb8f6e96300d3d3a7d7beacfb0c84d96100f68f251ea0f6"
	I1009 20:21:11.519421  500265 cri.go:89] found id: "a5832f172fdf43a40fddfb19a9cd192309bb7216cfb2d490b21e4a51b24a923e"
	I1009 20:21:11.519425  500265 cri.go:89] found id: ""
	I1009 20:21:11.519482  500265 ssh_runner.go:195] Run: sudo runc list -f json
	W1009 20:21:11.560836  500265 kubeadm.go:407] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-09T20:21:11Z" level=error msg="open /run/runc: no such file or directory"
	I1009 20:21:11.560917  500265 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1009 20:21:11.576935  500265 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1009 20:21:11.576955  500265 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1009 20:21:11.577012  500265 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1009 20:21:11.587378  500265 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1009 20:21:11.587756  500265 kubeconfig.go:47] verify endpoint returned: get endpoint: "default-k8s-diff-port-417984" does not appear in /home/jenkins/minikube-integration/21683-294150/kubeconfig
	I1009 20:21:11.587873  500265 kubeconfig.go:62] /home/jenkins/minikube-integration/21683-294150/kubeconfig needs updating (will repair): [kubeconfig missing "default-k8s-diff-port-417984" cluster setting kubeconfig missing "default-k8s-diff-port-417984" context setting]
	I1009 20:21:11.588152  500265 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-294150/kubeconfig: {Name:mke4661344aa77e10bf6690825e8d3a29b29b411 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 20:21:11.589812  500265 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1009 20:21:11.598110  500265 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.85.2
	I1009 20:21:11.598140  500265 kubeadm.go:601] duration metric: took 21.179578ms to restartPrimaryControlPlane
	I1009 20:21:11.598149  500265 kubeadm.go:402] duration metric: took 171.7513ms to StartCluster
	I1009 20:21:11.598164  500265 settings.go:142] acquiring lock: {Name:mk20228ebaa2294ae35726600a0d8058088b24a4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 20:21:11.598219  500265 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21683-294150/kubeconfig
	I1009 20:21:11.598814  500265 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-294150/kubeconfig: {Name:mke4661344aa77e10bf6690825e8d3a29b29b411 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 20:21:11.599020  500265 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1009 20:21:11.599365  500265 config.go:182] Loaded profile config "default-k8s-diff-port-417984": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1009 20:21:11.599401  500265 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1009 20:21:11.599461  500265 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-417984"
	I1009 20:21:11.599486  500265 addons.go:238] Setting addon storage-provisioner=true in "default-k8s-diff-port-417984"
	W1009 20:21:11.599492  500265 addons.go:247] addon storage-provisioner should already be in state true
	I1009 20:21:11.599512  500265 host.go:66] Checking if "default-k8s-diff-port-417984" exists ...
	I1009 20:21:11.600040  500265 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-417984 --format={{.State.Status}}
	I1009 20:21:11.600383  500265 addons.go:69] Setting dashboard=true in profile "default-k8s-diff-port-417984"
	I1009 20:21:11.600411  500265 addons.go:238] Setting addon dashboard=true in "default-k8s-diff-port-417984"
	W1009 20:21:11.600419  500265 addons.go:247] addon dashboard should already be in state true
	I1009 20:21:11.600450  500265 host.go:66] Checking if "default-k8s-diff-port-417984" exists ...
	I1009 20:21:11.600888  500265 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-417984 --format={{.State.Status}}
	I1009 20:21:11.613374  500265 out.go:179] * Verifying Kubernetes components...
	I1009 20:21:11.613583  500265 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-417984"
	I1009 20:21:11.613617  500265 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-417984"
	I1009 20:21:11.613963  500265 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-417984 --format={{.State.Status}}
	I1009 20:21:11.621324  500265 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 20:21:11.665273  500265 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1009 20:21:11.668767  500265 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1009 20:21:11.668923  500265 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1009 20:21:11.668938  500265 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1009 20:21:11.669003  500265 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-417984
	I1009 20:21:11.675102  500265 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1009 20:21:08.232400  501106 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21683-294150/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v newest-cni-160257:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 -I lz4 -xf /preloaded.tar -C /extractDir: (4.908919737s)
	I1009 20:21:08.232430  501106 kic.go:203] duration metric: took 4.909078689s to extract preloaded images to volume ...
	W1009 20:21:08.232574  501106 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1009 20:21:08.232681  501106 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1009 20:21:08.328280  501106 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname newest-cni-160257 --name newest-cni-160257 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-160257 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=newest-cni-160257 --network newest-cni-160257 --ip 192.168.76.2 --volume newest-cni-160257:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92
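Note: the --publish=127.0.0.1::PORT flags ask Docker to bind each container port to an ephemeral localhost port; the host port chosen for container port 22 (33456 in this run) is what the later "new ssh client" lines connect to. The mapping can be read back with, for example:

	docker port newest-cni-160257 22
	# expected output in this run: 127.0.0.1:33456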
	I1009 20:21:08.694684  501106 cli_runner.go:164] Run: docker container inspect newest-cni-160257 --format={{.State.Running}}
	I1009 20:21:08.726484  501106 cli_runner.go:164] Run: docker container inspect newest-cni-160257 --format={{.State.Status}}
	I1009 20:21:08.748774  501106 cli_runner.go:164] Run: docker exec newest-cni-160257 stat /var/lib/dpkg/alternatives/iptables
	I1009 20:21:08.834400  501106 oci.go:144] the created container "newest-cni-160257" has a running status.
	I1009 20:21:08.834434  501106 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21683-294150/.minikube/machines/newest-cni-160257/id_rsa...
	I1009 20:21:10.045851  501106 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21683-294150/.minikube/machines/newest-cni-160257/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1009 20:21:10.081858  501106 cli_runner.go:164] Run: docker container inspect newest-cni-160257 --format={{.State.Status}}
	I1009 20:21:10.110591  501106 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1009 20:21:10.110612  501106 kic_runner.go:114] Args: [docker exec --privileged newest-cni-160257 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1009 20:21:10.188257  501106 cli_runner.go:164] Run: docker container inspect newest-cni-160257 --format={{.State.Status}}
	I1009 20:21:10.219963  501106 machine.go:93] provisionDockerMachine start ...
	I1009 20:21:10.220067  501106 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-160257
	I1009 20:21:10.255361  501106 main.go:141] libmachine: Using SSH client type: native
	I1009 20:21:10.255705  501106 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33456 <nil> <nil>}
	I1009 20:21:10.255722  501106 main.go:141] libmachine: About to run SSH command:
	hostname
	I1009 20:21:10.538670  501106 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-160257
	
	I1009 20:21:10.538692  501106 ubuntu.go:182] provisioning hostname "newest-cni-160257"
	I1009 20:21:10.538757  501106 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-160257
	I1009 20:21:10.565309  501106 main.go:141] libmachine: Using SSH client type: native
	I1009 20:21:10.565642  501106 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33456 <nil> <nil>}
	I1009 20:21:10.565655  501106 main.go:141] libmachine: About to run SSH command:
	sudo hostname newest-cni-160257 && echo "newest-cni-160257" | sudo tee /etc/hostname
	I1009 20:21:10.790810  501106 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-160257
	
	I1009 20:21:10.790886  501106 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-160257
	I1009 20:21:10.811832  501106 main.go:141] libmachine: Using SSH client type: native
	I1009 20:21:10.812134  501106 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33456 <nil> <nil>}
	I1009 20:21:10.812153  501106 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-160257' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-160257/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-160257' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1009 20:21:11.030274  501106 main.go:141] libmachine: SSH cmd err, output: <nil>: 
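Note: the SSH script above follows the common Debian/Ubuntu convention of mapping the machine's own hostname to 127.0.1.1, rewriting an existing entry if one is present and appending otherwise, so after provisioning the node's /etc/hosts should contain a line like:

	127.0.1.1 newest-cni-160257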
	I1009 20:21:11.030355  501106 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21683-294150/.minikube CaCertPath:/home/jenkins/minikube-integration/21683-294150/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21683-294150/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21683-294150/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21683-294150/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21683-294150/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21683-294150/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21683-294150/.minikube}
	I1009 20:21:11.030404  501106 ubuntu.go:190] setting up certificates
	I1009 20:21:11.030433  501106 provision.go:84] configureAuth start
	I1009 20:21:11.030528  501106 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-160257
	I1009 20:21:11.055296  501106 provision.go:143] copyHostCerts
	I1009 20:21:11.055363  501106 exec_runner.go:144] found /home/jenkins/minikube-integration/21683-294150/.minikube/ca.pem, removing ...
	I1009 20:21:11.055372  501106 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21683-294150/.minikube/ca.pem
	I1009 20:21:11.055431  501106 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-294150/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21683-294150/.minikube/ca.pem (1078 bytes)
	I1009 20:21:11.055544  501106 exec_runner.go:144] found /home/jenkins/minikube-integration/21683-294150/.minikube/cert.pem, removing ...
	I1009 20:21:11.055549  501106 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21683-294150/.minikube/cert.pem
	I1009 20:21:11.055575  501106 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-294150/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21683-294150/.minikube/cert.pem (1123 bytes)
	I1009 20:21:11.055650  501106 exec_runner.go:144] found /home/jenkins/minikube-integration/21683-294150/.minikube/key.pem, removing ...
	I1009 20:21:11.055655  501106 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21683-294150/.minikube/key.pem
	I1009 20:21:11.055681  501106 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-294150/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21683-294150/.minikube/key.pem (1679 bytes)
	I1009 20:21:11.055736  501106 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21683-294150/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21683-294150/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21683-294150/.minikube/certs/ca-key.pem org=jenkins.newest-cni-160257 san=[127.0.0.1 192.168.76.2 localhost minikube newest-cni-160257]
	I1009 20:21:11.359344  501106 provision.go:177] copyRemoteCerts
	I1009 20:21:11.359408  501106 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1009 20:21:11.359447  501106 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-160257
	I1009 20:21:11.383185  501106 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33456 SSHKeyPath:/home/jenkins/minikube-integration/21683-294150/.minikube/machines/newest-cni-160257/id_rsa Username:docker}
	I1009 20:21:11.491517  501106 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-294150/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1009 20:21:11.523385  501106 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-294150/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1009 20:21:11.547295  501106 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-294150/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1009 20:21:11.574535  501106 provision.go:87] duration metric: took 544.064724ms to configureAuth
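Note: configureAuth generates a Docker machine server certificate whose subject alternative names cover the addresses listed in the san=[...] line above (loopback, the container IP 192.168.76.2 and the machine names) and copies it to /etc/docker/server.pem. The SANs on the copied certificate could be confirmed with, for example:

	openssl x509 -in /etc/docker/server.pem -noout -text | grep -A1 "Subject Alternative Name"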
	I1009 20:21:11.574567  501106 ubuntu.go:206] setting minikube options for container-runtime
	I1009 20:21:11.574762  501106 config.go:182] Loaded profile config "newest-cni-160257": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1009 20:21:11.574880  501106 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-160257
	I1009 20:21:11.644092  501106 main.go:141] libmachine: Using SSH client type: native
	I1009 20:21:11.644408  501106 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33456 <nil> <nil>}
	I1009 20:21:11.644429  501106 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1009 20:21:12.077968  501106 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1009 20:21:12.077992  501106 machine.go:96] duration metric: took 1.858009325s to provisionDockerMachine
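Note: the sysconfig one-liner a few lines above writes /etc/sysconfig/crio.minikube and restarts CRI-O so that the entire service CIDR is treated as an insecure registry, presumably so images can be pulled from in-cluster registries over plain HTTP. The resulting file is simply:

	# /etc/sysconfig/crio.minikube (as written in this run)
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '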
	I1009 20:21:12.078002  501106 client.go:171] duration metric: took 9.433449568s to LocalClient.Create
	I1009 20:21:12.078035  501106 start.go:168] duration metric: took 9.433538029s to libmachine.API.Create "newest-cni-160257"
	I1009 20:21:12.078047  501106 start.go:294] postStartSetup for "newest-cni-160257" (driver="docker")
	I1009 20:21:12.078057  501106 start.go:323] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1009 20:21:12.078147  501106 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1009 20:21:12.078221  501106 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-160257
	I1009 20:21:12.105065  501106 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33456 SSHKeyPath:/home/jenkins/minikube-integration/21683-294150/.minikube/machines/newest-cni-160257/id_rsa Username:docker}
	I1009 20:21:12.220530  501106 ssh_runner.go:195] Run: cat /etc/os-release
	I1009 20:21:12.224345  501106 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1009 20:21:12.224371  501106 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1009 20:21:12.224383  501106 filesync.go:126] Scanning /home/jenkins/minikube-integration/21683-294150/.minikube/addons for local assets ...
	I1009 20:21:12.224440  501106 filesync.go:126] Scanning /home/jenkins/minikube-integration/21683-294150/.minikube/files for local assets ...
	I1009 20:21:12.224525  501106 filesync.go:149] local asset: /home/jenkins/minikube-integration/21683-294150/.minikube/files/etc/ssl/certs/2960022.pem -> 2960022.pem in /etc/ssl/certs
	I1009 20:21:12.224629  501106 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1009 20:21:12.238566  501106 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-294150/.minikube/files/etc/ssl/certs/2960022.pem --> /etc/ssl/certs/2960022.pem (1708 bytes)
	I1009 20:21:12.272068  501106 start.go:297] duration metric: took 194.005115ms for postStartSetup
	I1009 20:21:12.272570  501106 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-160257
	I1009 20:21:12.301869  501106 profile.go:143] Saving config to /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/newest-cni-160257/config.json ...
	I1009 20:21:12.302163  501106 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1009 20:21:12.302210  501106 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-160257
	I1009 20:21:12.325297  501106 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33456 SSHKeyPath:/home/jenkins/minikube-integration/21683-294150/.minikube/machines/newest-cni-160257/id_rsa Username:docker}
	I1009 20:21:11.679046  500265 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1009 20:21:11.679075  500265 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1009 20:21:11.679151  500265 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-417984
	I1009 20:21:11.690108  500265 addons.go:238] Setting addon default-storageclass=true in "default-k8s-diff-port-417984"
	W1009 20:21:11.690131  500265 addons.go:247] addon default-storageclass should already be in state true
	I1009 20:21:11.690155  500265 host.go:66] Checking if "default-k8s-diff-port-417984" exists ...
	I1009 20:21:11.690591  500265 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-417984 --format={{.State.Status}}
	I1009 20:21:11.736533  500265 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33451 SSHKeyPath:/home/jenkins/minikube-integration/21683-294150/.minikube/machines/default-k8s-diff-port-417984/id_rsa Username:docker}
	I1009 20:21:11.761288  500265 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33451 SSHKeyPath:/home/jenkins/minikube-integration/21683-294150/.minikube/machines/default-k8s-diff-port-417984/id_rsa Username:docker}
	I1009 20:21:11.764374  500265 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1009 20:21:11.764395  500265 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1009 20:21:11.764459  500265 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-417984
	I1009 20:21:11.789457  500265 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33451 SSHKeyPath:/home/jenkins/minikube-integration/21683-294150/.minikube/machines/default-k8s-diff-port-417984/id_rsa Username:docker}
	I1009 20:21:12.071357  500265 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1009 20:21:12.101868  500265 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1009 20:21:12.143267  500265 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-417984" to be "Ready" ...
	I1009 20:21:12.158701  500265 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1009 20:21:12.158724  500265 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1009 20:21:12.174533  500265 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1009 20:21:12.252730  500265 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1009 20:21:12.252752  500265 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1009 20:21:12.335740  500265 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1009 20:21:12.335761  500265 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1009 20:21:12.407696  500265 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1009 20:21:12.407726  500265 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1009 20:21:12.511730  500265 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1009 20:21:12.511759  500265 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1009 20:21:12.619144  500265 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1009 20:21:12.619165  500265 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1009 20:21:12.637609  500265 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1009 20:21:12.637630  500265 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1009 20:21:12.652008  500265 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1009 20:21:12.652028  500265 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1009 20:21:12.676561  500265 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1009 20:21:12.676581  500265 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1009 20:21:12.691406  500265 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1009 20:21:12.445992  501106 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1009 20:21:12.451690  501106 start.go:129] duration metric: took 9.810953745s to createHost
	I1009 20:21:12.451715  501106 start.go:84] releasing machines lock for "newest-cni-160257", held for 9.811092322s
	I1009 20:21:12.451785  501106 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-160257
	I1009 20:21:12.477428  501106 ssh_runner.go:195] Run: cat /version.json
	I1009 20:21:12.477498  501106 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-160257
	I1009 20:21:12.477749  501106 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1009 20:21:12.477816  501106 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-160257
	I1009 20:21:12.522592  501106 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33456 SSHKeyPath:/home/jenkins/minikube-integration/21683-294150/.minikube/machines/newest-cni-160257/id_rsa Username:docker}
	I1009 20:21:12.527598  501106 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33456 SSHKeyPath:/home/jenkins/minikube-integration/21683-294150/.minikube/machines/newest-cni-160257/id_rsa Username:docker}
	I1009 20:21:12.657007  501106 ssh_runner.go:195] Run: systemctl --version
	I1009 20:21:12.791751  501106 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1009 20:21:12.856530  501106 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1009 20:21:12.865201  501106 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1009 20:21:12.865302  501106 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1009 20:21:12.908032  501106 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1009 20:21:12.908058  501106 start.go:496] detecting cgroup driver to use...
	I1009 20:21:12.908123  501106 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1009 20:21:12.908195  501106 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1009 20:21:12.932945  501106 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1009 20:21:12.951665  501106 docker.go:218] disabling cri-docker service (if available) ...
	I1009 20:21:12.951758  501106 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1009 20:21:12.970381  501106 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1009 20:21:12.990060  501106 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1009 20:21:13.169273  501106 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1009 20:21:13.369525  501106 docker.go:234] disabling docker service ...
	I1009 20:21:13.369633  501106 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1009 20:21:13.413614  501106 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1009 20:21:13.439400  501106 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1009 20:21:13.660614  501106 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1009 20:21:13.900800  501106 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1009 20:21:13.925659  501106 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1009 20:21:13.959307  501106 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1009 20:21:13.959400  501106 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 20:21:13.975660  501106 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1009 20:21:13.975746  501106 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 20:21:13.993900  501106 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 20:21:14.006890  501106 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 20:21:14.019280  501106 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1009 20:21:14.035183  501106 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 20:21:14.045912  501106 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 20:21:14.066604  501106 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 20:21:14.081086  501106 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1009 20:21:14.090113  501106 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1009 20:21:14.106688  501106 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 20:21:14.300925  501106 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1009 20:21:14.552891  501106 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1009 20:21:14.553018  501106 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1009 20:21:14.557288  501106 start.go:564] Will wait 60s for crictl version
	I1009 20:21:14.557383  501106 ssh_runner.go:195] Run: which crictl
	I1009 20:21:14.566343  501106 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1009 20:21:14.607049  501106 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1009 20:21:14.607164  501106 ssh_runner.go:195] Run: crio --version
	I1009 20:21:14.662858  501106 ssh_runner.go:195] Run: crio --version
	I1009 20:21:14.718274  501106 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1009 20:21:14.721206  501106 cli_runner.go:164] Run: docker network inspect newest-cni-160257 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1009 20:21:14.748839  501106 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1009 20:21:14.752660  501106 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1009 20:21:14.778473  501106 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1009 20:21:14.781322  501106 kubeadm.go:883] updating cluster {Name:newest-cni-160257 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-160257 Namespace:default APIServerHAVIP: APIServerName:minikubeCA API
ServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false Disab
leMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1009 20:21:14.781477  501106 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1009 20:21:14.781575  501106 ssh_runner.go:195] Run: sudo crictl images --output json
	I1009 20:21:14.845369  501106 crio.go:514] all images are preloaded for cri-o runtime.
	I1009 20:21:14.845389  501106 crio.go:433] Images already preloaded, skipping extraction
	I1009 20:21:14.845445  501106 ssh_runner.go:195] Run: sudo crictl images --output json
	I1009 20:21:14.884435  501106 crio.go:514] all images are preloaded for cri-o runtime.
	I1009 20:21:14.884455  501106 cache_images.go:85] Images are preloaded, skipping loading
	I1009 20:21:14.884465  501106 kubeadm.go:934] updating node { 192.168.76.2 8443 v1.34.1 crio true true} ...
	I1009 20:21:14.884551  501106 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=newest-cni-160257 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:newest-cni-160257 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1009 20:21:14.884635  501106 ssh_runner.go:195] Run: crio config
	I1009 20:21:14.994273  501106 cni.go:84] Creating CNI manager for ""
	I1009 20:21:14.994339  501106 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1009 20:21:14.994368  501106 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1009 20:21:14.994428  501106 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-160257 NodeName:newest-cni-160257 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/
kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1009 20:21:14.994601  501106 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-160257"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1009 20:21:14.994715  501106 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1009 20:21:15.005099  501106 binaries.go:44] Found k8s binaries, skipping transfer
	I1009 20:21:15.005268  501106 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1009 20:21:15.032501  501106 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1009 20:21:15.075121  501106 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1009 20:21:15.097780  501106 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2212 bytes)
	I1009 20:21:15.122818  501106 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1009 20:21:15.127545  501106 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1009 20:21:15.146098  501106 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 20:21:15.357569  501106 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1009 20:21:15.405589  501106 certs.go:69] Setting up /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/newest-cni-160257 for IP: 192.168.76.2
	I1009 20:21:15.405612  501106 certs.go:195] generating shared ca certs ...
	I1009 20:21:15.405629  501106 certs.go:227] acquiring lock for ca certs: {Name:mk21fccf7de12baa51e226eec44546b42b579a70 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 20:21:15.405769  501106 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21683-294150/.minikube/ca.key
	I1009 20:21:15.405815  501106 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21683-294150/.minikube/proxy-client-ca.key
	I1009 20:21:15.405827  501106 certs.go:257] generating profile certs ...
	I1009 20:21:15.405884  501106 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/newest-cni-160257/client.key
	I1009 20:21:15.405901  501106 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/newest-cni-160257/client.crt with IP's: []
	I1009 20:21:16.146890  501106 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/newest-cni-160257/client.crt ...
	I1009 20:21:16.146921  501106 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/newest-cni-160257/client.crt: {Name:mk99dc7c02aa61a5fecb1ffc4d591b2da4579d64 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 20:21:16.147138  501106 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/newest-cni-160257/client.key ...
	I1009 20:21:16.147153  501106 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/newest-cni-160257/client.key: {Name:mk3a7e4732e37c1e033a8f6324602464d159dd07 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 20:21:16.147264  501106 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/newest-cni-160257/apiserver.key.f76169c2
	I1009 20:21:16.147284  501106 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/newest-cni-160257/apiserver.crt.f76169c2 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1009 20:21:16.901875  501106 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/newest-cni-160257/apiserver.crt.f76169c2 ...
	I1009 20:21:16.901906  501106 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/newest-cni-160257/apiserver.crt.f76169c2: {Name:mk36d566289bfeee9ad0d3e54f360ef740302a85 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 20:21:16.902076  501106 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/newest-cni-160257/apiserver.key.f76169c2 ...
	I1009 20:21:16.902092  501106 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/newest-cni-160257/apiserver.key.f76169c2: {Name:mk6436296e20f95ba04b3016334fd57e9d2bc759 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 20:21:16.902174  501106 certs.go:382] copying /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/newest-cni-160257/apiserver.crt.f76169c2 -> /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/newest-cni-160257/apiserver.crt
	I1009 20:21:16.902259  501106 certs.go:386] copying /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/newest-cni-160257/apiserver.key.f76169c2 -> /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/newest-cni-160257/apiserver.key
	I1009 20:21:16.902321  501106 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/newest-cni-160257/proxy-client.key
	I1009 20:21:16.902341  501106 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/newest-cni-160257/proxy-client.crt with IP's: []
	I1009 20:21:17.146052  501106 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/newest-cni-160257/proxy-client.crt ...
	I1009 20:21:17.146081  501106 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/newest-cni-160257/proxy-client.crt: {Name:mk0d437a027e20545d1731c29c66f8a15b7499ed Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 20:21:17.146240  501106 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/newest-cni-160257/proxy-client.key ...
	I1009 20:21:17.146256  501106 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/newest-cni-160257/proxy-client.key: {Name:mk2402728b595e63feb4a48f1de37b3c45c6fa2e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 20:21:17.146452  501106 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-294150/.minikube/certs/296002.pem (1338 bytes)
	W1009 20:21:17.146496  501106 certs.go:480] ignoring /home/jenkins/minikube-integration/21683-294150/.minikube/certs/296002_empty.pem, impossibly tiny 0 bytes
	I1009 20:21:17.146511  501106 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-294150/.minikube/certs/ca-key.pem (1679 bytes)
	I1009 20:21:17.146544  501106 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-294150/.minikube/certs/ca.pem (1078 bytes)
	I1009 20:21:17.146579  501106 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-294150/.minikube/certs/cert.pem (1123 bytes)
	I1009 20:21:17.146605  501106 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-294150/.minikube/certs/key.pem (1679 bytes)
	I1009 20:21:17.146650  501106 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-294150/.minikube/files/etc/ssl/certs/2960022.pem (1708 bytes)
	I1009 20:21:17.147193  501106 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-294150/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1009 20:21:17.166215  501106 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-294150/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1009 20:21:17.184277  501106 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-294150/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1009 20:21:17.202184  501106 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-294150/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1009 20:21:17.231091  501106 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/newest-cni-160257/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1009 20:21:17.258498  501106 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/newest-cni-160257/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1009 20:21:17.284391  501106 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/newest-cni-160257/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1009 20:21:17.320272  501106 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/newest-cni-160257/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1009 20:21:17.342324  501106 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-294150/.minikube/certs/296002.pem --> /usr/share/ca-certificates/296002.pem (1338 bytes)
	I1009 20:21:17.360555  501106 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-294150/.minikube/files/etc/ssl/certs/2960022.pem --> /usr/share/ca-certificates/2960022.pem (1708 bytes)
	I1009 20:21:17.380996  501106 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-294150/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1009 20:21:17.411024  501106 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1009 20:21:17.441639  501106 ssh_runner.go:195] Run: openssl version
	I1009 20:21:17.451561  501106 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2960022.pem && ln -fs /usr/share/ca-certificates/2960022.pem /etc/ssl/certs/2960022.pem"
	I1009 20:21:17.460746  501106 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2960022.pem
	I1009 20:21:17.464964  501106 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  9 19:08 /usr/share/ca-certificates/2960022.pem
	I1009 20:21:17.465030  501106 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2960022.pem
	I1009 20:21:17.513495  501106 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2960022.pem /etc/ssl/certs/3ec20f2e.0"
	I1009 20:21:17.522064  501106 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1009 20:21:17.538993  501106 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1009 20:21:17.543717  501106 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  9 19:01 /usr/share/ca-certificates/minikubeCA.pem
	I1009 20:21:17.543796  501106 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1009 20:21:17.590019  501106 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1009 20:21:17.608085  501106 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/296002.pem && ln -fs /usr/share/ca-certificates/296002.pem /etc/ssl/certs/296002.pem"
	I1009 20:21:17.620880  501106 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/296002.pem
	I1009 20:21:17.625531  501106 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  9 19:08 /usr/share/ca-certificates/296002.pem
	I1009 20:21:17.625609  501106 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/296002.pem
	I1009 20:21:17.674065  501106 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/296002.pem /etc/ssl/certs/51391683.0"
	I1009 20:21:17.682717  501106 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1009 20:21:17.687440  501106 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1009 20:21:17.687504  501106 kubeadm.go:400] StartCluster: {Name:newest-cni-160257 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-160257 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APISer
verNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableM
etrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1009 20:21:17.687587  501106 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1009 20:21:17.687665  501106 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1009 20:21:17.722209  501106 cri.go:89] found id: ""
	I1009 20:21:17.722295  501106 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1009 20:21:17.732760  501106 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1009 20:21:17.742106  501106 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1009 20:21:17.742186  501106 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1009 20:21:17.753289  501106 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1009 20:21:17.753309  501106 kubeadm.go:157] found existing configuration files:
	
	I1009 20:21:17.753375  501106 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1009 20:21:17.762278  501106 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1009 20:21:17.762356  501106 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1009 20:21:17.770095  501106 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1009 20:21:17.778967  501106 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1009 20:21:17.779046  501106 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1009 20:21:17.787218  501106 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1009 20:21:17.796045  501106 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1009 20:21:17.796125  501106 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1009 20:21:17.804092  501106 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1009 20:21:17.813628  501106 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1009 20:21:17.813704  501106 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1009 20:21:17.822115  501106 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1009 20:21:17.873880  501106 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1009 20:21:17.874307  501106 kubeadm.go:318] [preflight] Running pre-flight checks
	I1009 20:21:17.912292  501106 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1009 20:21:17.912411  501106 kubeadm.go:318] KERNEL_VERSION: 5.15.0-1084-aws
	I1009 20:21:17.912462  501106 kubeadm.go:318] OS: Linux
	I1009 20:21:17.912535  501106 kubeadm.go:318] CGROUPS_CPU: enabled
	I1009 20:21:17.912605  501106 kubeadm.go:318] CGROUPS_CPUACCT: enabled
	I1009 20:21:17.912670  501106 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1009 20:21:17.912732  501106 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1009 20:21:17.912792  501106 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1009 20:21:17.912852  501106 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1009 20:21:17.912907  501106 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1009 20:21:17.912965  501106 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1009 20:21:17.913021  501106 kubeadm.go:318] CGROUPS_BLKIO: enabled
	I1009 20:21:18.115775  501106 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1009 20:21:18.115900  501106 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1009 20:21:18.116012  501106 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1009 20:21:18.133587  501106 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1009 20:21:19.083213  500265 node_ready.go:49] node "default-k8s-diff-port-417984" is "Ready"
	I1009 20:21:19.083245  500265 node_ready.go:38] duration metric: took 6.939950478s for node "default-k8s-diff-port-417984" to be "Ready" ...
	I1009 20:21:19.083260  500265 api_server.go:52] waiting for apiserver process to appear ...
	I1009 20:21:19.083322  500265 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:21:22.262508  500265 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (10.1606082s)
	I1009 20:21:22.262553  500265 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (10.088002969s)
	I1009 20:21:22.262794  500265 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (9.571264646s)
	I1009 20:21:22.263031  500265 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (3.179691026s)
	I1009 20:21:22.263051  500265 api_server.go:72] duration metric: took 10.664009551s to wait for apiserver process to appear ...
	I1009 20:21:22.263057  500265 api_server.go:88] waiting for apiserver healthz status ...
	I1009 20:21:22.263074  500265 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8444/healthz ...
	I1009 20:21:22.266053  500265 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p default-k8s-diff-port-417984 addons enable metrics-server
	
	I1009 20:21:22.281310  500265 api_server.go:279] https://192.168.85.2:8444/healthz returned 200:
	ok
	I1009 20:21:22.282651  500265 api_server.go:141] control plane version: v1.34.1
	I1009 20:21:22.282726  500265 api_server.go:131] duration metric: took 19.661026ms to wait for apiserver health ...
	I1009 20:21:22.282750  500265 system_pods.go:43] waiting for kube-system pods to appear ...
	I1009 20:21:22.294238  500265 system_pods.go:59] 8 kube-system pods found
	I1009 20:21:22.294277  500265 system_pods.go:61] "coredns-66bc5c9577-4c2vb" [1372d4eb-13df-43ba-add1-18330c9c110d] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1009 20:21:22.294286  500265 system_pods.go:61] "etcd-default-k8s-diff-port-417984" [2f46d319-463a-4bf1-b9f0-33d017fe17c5] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1009 20:21:22.294292  500265 system_pods.go:61] "kindnet-s57gp" [c69cde96-0e11-4f41-a715-961981d36066] Running
	I1009 20:21:22.294300  500265 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-417984" [fff706cc-3c18-400c-9fb7-10cec1723bc7] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1009 20:21:22.294307  500265 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-417984" [b8ecf531-e830-4a99-abcc-1fc8175c1598] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1009 20:21:22.294312  500265 system_pods.go:61] "kube-proxy-jnlzf" [c888f2c2-aaea-43d1-b81a-fe2762b4f733] Running
	I1009 20:21:22.294322  500265 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-417984" [27737a80-8846-4a8f-b4c6-2845ddca3cca] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1009 20:21:22.294326  500265 system_pods.go:61] "storage-provisioner" [35085697-b4c2-4265-a1eb-2ced25791f19] Running
	I1009 20:21:22.294333  500265 system_pods.go:74] duration metric: took 11.563922ms to wait for pod list to return data ...
	I1009 20:21:22.294341  500265 default_sa.go:34] waiting for default service account to be created ...
	I1009 20:21:22.307812  500265 default_sa.go:45] found service account: "default"
	I1009 20:21:22.307832  500265 default_sa.go:55] duration metric: took 13.485226ms for default service account to be created ...
	I1009 20:21:22.307841  500265 system_pods.go:116] waiting for k8s-apps to be running ...
	I1009 20:21:22.310843  500265 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	I1009 20:21:18.139366  501106 out.go:252]   - Generating certificates and keys ...
	I1009 20:21:18.139490  501106 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1009 20:21:18.139583  501106 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1009 20:21:18.605614  501106 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1009 20:21:18.984599  501106 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1009 20:21:19.380070  501106 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1009 20:21:19.502808  501106 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1009 20:21:20.024765  501106 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1009 20:21:20.025546  501106 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [localhost newest-cni-160257] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1009 20:21:20.232272  501106 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1009 20:21:20.232815  501106 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [localhost newest-cni-160257] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1009 20:21:20.598514  501106 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1009 20:21:21.369408  501106 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1009 20:21:21.583815  501106 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1009 20:21:21.584406  501106 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1009 20:21:22.172440  501106 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1009 20:21:22.313199  500265 system_pods.go:86] 8 kube-system pods found
	I1009 20:21:22.313235  500265 system_pods.go:89] "coredns-66bc5c9577-4c2vb" [1372d4eb-13df-43ba-add1-18330c9c110d] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1009 20:21:22.313246  500265 system_pods.go:89] "etcd-default-k8s-diff-port-417984" [2f46d319-463a-4bf1-b9f0-33d017fe17c5] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1009 20:21:22.313252  500265 system_pods.go:89] "kindnet-s57gp" [c69cde96-0e11-4f41-a715-961981d36066] Running
	I1009 20:21:22.313262  500265 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-417984" [fff706cc-3c18-400c-9fb7-10cec1723bc7] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1009 20:21:22.313268  500265 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-417984" [b8ecf531-e830-4a99-abcc-1fc8175c1598] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1009 20:21:22.313273  500265 system_pods.go:89] "kube-proxy-jnlzf" [c888f2c2-aaea-43d1-b81a-fe2762b4f733] Running
	I1009 20:21:22.313289  500265 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-417984" [27737a80-8846-4a8f-b4c6-2845ddca3cca] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1009 20:21:22.313298  500265 system_pods.go:89] "storage-provisioner" [35085697-b4c2-4265-a1eb-2ced25791f19] Running
	I1009 20:21:22.313306  500265 system_pods.go:126] duration metric: took 5.459434ms to wait for k8s-apps to be running ...
	I1009 20:21:22.313314  500265 system_svc.go:44] waiting for kubelet service to be running ....
	I1009 20:21:22.313368  500265 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1009 20:21:22.314557  500265 addons.go:514] duration metric: took 10.715141076s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1009 20:21:22.353716  500265 system_svc.go:56] duration metric: took 40.39268ms WaitForService to wait for kubelet
	I1009 20:21:22.353741  500265 kubeadm.go:586] duration metric: took 10.754698168s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1009 20:21:22.353760  500265 node_conditions.go:102] verifying NodePressure condition ...
	I1009 20:21:22.370026  500265 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1009 20:21:22.370105  500265 node_conditions.go:123] node cpu capacity is 2
	I1009 20:21:22.370133  500265 node_conditions.go:105] duration metric: took 16.366256ms to run NodePressure ...
	I1009 20:21:22.370162  500265 start.go:242] waiting for startup goroutines ...
	I1009 20:21:22.370200  500265 start.go:247] waiting for cluster config update ...
	I1009 20:21:22.370227  500265 start.go:256] writing updated cluster config ...
	I1009 20:21:22.370542  500265 ssh_runner.go:195] Run: rm -f paused
	I1009 20:21:22.377902  500265 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1009 20:21:22.386578  500265 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-4c2vb" in "kube-system" namespace to be "Ready" or be gone ...
	W1009 20:21:24.399189  500265 pod_ready.go:104] pod "coredns-66bc5c9577-4c2vb" is not "Ready", error: <nil>
	I1009 20:21:22.481240  501106 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1009 20:21:23.173417  501106 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1009 20:21:23.837487  501106 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1009 20:21:24.604771  501106 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1009 20:21:24.604873  501106 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1009 20:21:24.608060  501106 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1009 20:21:24.611868  501106 out.go:252]   - Booting up control plane ...
	I1009 20:21:24.611975  501106 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1009 20:21:24.612055  501106 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1009 20:21:24.612126  501106 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1009 20:21:24.632963  501106 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1009 20:21:24.633075  501106 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1009 20:21:24.645856  501106 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1009 20:21:24.645962  501106 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1009 20:21:24.646003  501106 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1009 20:21:24.832371  501106 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1009 20:21:24.832555  501106 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1009 20:21:25.833463  501106 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.001421898s
	I1009 20:21:25.836770  501106 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1009 20:21:25.836868  501106 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.76.2:8443/livez
	I1009 20:21:25.836961  501106 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1009 20:21:25.837043  501106 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	W1009 20:21:26.899087  500265 pod_ready.go:104] pod "coredns-66bc5c9577-4c2vb" is not "Ready", error: <nil>
	W1009 20:21:29.396903  500265 pod_ready.go:104] pod "coredns-66bc5c9577-4c2vb" is not "Ready", error: <nil>
	I1009 20:21:28.961634  501106 kubeadm.go:318] [control-plane-check] kube-controller-manager is healthy after 3.124343157s
	I1009 20:21:32.305990  501106 kubeadm.go:318] [control-plane-check] kube-scheduler is healthy after 6.469237344s
	I1009 20:21:34.340161  501106 kubeadm.go:318] [control-plane-check] kube-apiserver is healthy after 8.503275071s
	I1009 20:21:34.370332  501106 kubeadm.go:318] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1009 20:21:34.401948  501106 kubeadm.go:318] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1009 20:21:34.424413  501106 kubeadm.go:318] [upload-certs] Skipping phase. Please see --upload-certs
	I1009 20:21:34.424940  501106 kubeadm.go:318] [mark-control-plane] Marking the node newest-cni-160257 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1009 20:21:34.439959  501106 kubeadm.go:318] [bootstrap-token] Using token: mvaqdp.yj8gv4bhmp43vmy6
	W1009 20:21:31.398454  500265 pod_ready.go:104] pod "coredns-66bc5c9577-4c2vb" is not "Ready", error: <nil>
	W1009 20:21:33.897175  500265 pod_ready.go:104] pod "coredns-66bc5c9577-4c2vb" is not "Ready", error: <nil>
	I1009 20:21:34.442907  501106 out.go:252]   - Configuring RBAC rules ...
	I1009 20:21:34.443033  501106 kubeadm.go:318] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1009 20:21:34.453798  501106 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1009 20:21:34.470101  501106 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1009 20:21:34.481826  501106 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1009 20:21:34.491432  501106 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1009 20:21:34.497842  501106 kubeadm.go:318] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1009 20:21:34.747856  501106 kubeadm.go:318] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1009 20:21:35.196530  501106 kubeadm.go:318] [addons] Applied essential addon: CoreDNS
	I1009 20:21:35.748421  501106 kubeadm.go:318] [addons] Applied essential addon: kube-proxy
	I1009 20:21:35.750092  501106 kubeadm.go:318] 
	I1009 20:21:35.750167  501106 kubeadm.go:318] Your Kubernetes control-plane has initialized successfully!
	I1009 20:21:35.750173  501106 kubeadm.go:318] 
	I1009 20:21:35.750249  501106 kubeadm.go:318] To start using your cluster, you need to run the following as a regular user:
	I1009 20:21:35.750254  501106 kubeadm.go:318] 
	I1009 20:21:35.750280  501106 kubeadm.go:318]   mkdir -p $HOME/.kube
	I1009 20:21:35.750922  501106 kubeadm.go:318]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1009 20:21:35.750989  501106 kubeadm.go:318]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1009 20:21:35.750996  501106 kubeadm.go:318] 
	I1009 20:21:35.751050  501106 kubeadm.go:318] Alternatively, if you are the root user, you can run:
	I1009 20:21:35.751054  501106 kubeadm.go:318] 
	I1009 20:21:35.751102  501106 kubeadm.go:318]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1009 20:21:35.751106  501106 kubeadm.go:318] 
	I1009 20:21:35.751158  501106 kubeadm.go:318] You should now deploy a pod network to the cluster.
	I1009 20:21:35.751252  501106 kubeadm.go:318] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1009 20:21:35.751328  501106 kubeadm.go:318]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1009 20:21:35.751333  501106 kubeadm.go:318] 
	I1009 20:21:35.751674  501106 kubeadm.go:318] You can now join any number of control-plane nodes by copying certificate authorities
	I1009 20:21:35.751761  501106 kubeadm.go:318] and service account keys on each node and then running the following as root:
	I1009 20:21:35.751766  501106 kubeadm.go:318] 
	I1009 20:21:35.752084  501106 kubeadm.go:318]   kubeadm join control-plane.minikube.internal:8443 --token mvaqdp.yj8gv4bhmp43vmy6 \
	I1009 20:21:35.752192  501106 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:e766d16640f098061f552dd476e80ebd3809bd57b4957045222f32c55d34903e \
	I1009 20:21:35.752423  501106 kubeadm.go:318] 	--control-plane 
	I1009 20:21:35.752434  501106 kubeadm.go:318] 
	I1009 20:21:35.752728  501106 kubeadm.go:318] Then you can join any number of worker nodes by running the following on each as root:
	I1009 20:21:35.752739  501106 kubeadm.go:318] 
	I1009 20:21:35.753051  501106 kubeadm.go:318] kubeadm join control-plane.minikube.internal:8443 --token mvaqdp.yj8gv4bhmp43vmy6 \
	I1009 20:21:35.753378  501106 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:e766d16640f098061f552dd476e80ebd3809bd57b4957045222f32c55d34903e 
	I1009 20:21:35.763987  501106 kubeadm.go:318] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1009 20:21:35.764403  501106 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1009 20:21:35.764535  501106 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1009 20:21:35.764547  501106 cni.go:84] Creating CNI manager for ""
	I1009 20:21:35.764555  501106 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1009 20:21:35.767955  501106 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1009 20:21:35.771041  501106 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1009 20:21:35.779899  501106 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1009 20:21:35.779916  501106 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1009 20:21:35.807892  501106 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1009 20:21:36.294380  501106 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1009 20:21:36.294534  501106 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1009 20:21:36.294623  501106 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes newest-cni-160257 minikube.k8s.io/updated_at=2025_10_09T20_21_36_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=67931d2cc0a2e29153be17ee4f2d502b8a45c9cb minikube.k8s.io/name=newest-cni-160257 minikube.k8s.io/primary=true
	I1009 20:21:36.693381  501106 ops.go:34] apiserver oom_adj: -16
	I1009 20:21:36.693484  501106 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1009 20:21:37.193941  501106 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	W1009 20:21:36.397431  500265 pod_ready.go:104] pod "coredns-66bc5c9577-4c2vb" is not "Ready", error: <nil>
	W1009 20:21:38.893011  500265 pod_ready.go:104] pod "coredns-66bc5c9577-4c2vb" is not "Ready", error: <nil>
	I1009 20:21:37.693538  501106 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1009 20:21:38.194329  501106 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1009 20:21:38.694276  501106 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1009 20:21:39.194072  501106 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1009 20:21:39.693586  501106 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1009 20:21:40.194338  501106 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1009 20:21:40.693931  501106 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1009 20:21:41.193640  501106 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1009 20:21:41.297221  501106 kubeadm.go:1113] duration metric: took 5.00274176s to wait for elevateKubeSystemPrivileges
	I1009 20:21:41.297249  501106 kubeadm.go:402] duration metric: took 23.609755408s to StartCluster
	I1009 20:21:41.297266  501106 settings.go:142] acquiring lock: {Name:mk20228ebaa2294ae35726600a0d8058088b24a4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 20:21:41.297326  501106 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21683-294150/kubeconfig
	I1009 20:21:41.298381  501106 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-294150/kubeconfig: {Name:mke4661344aa77e10bf6690825e8d3a29b29b411 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 20:21:41.298601  501106 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1009 20:21:41.298730  501106 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1009 20:21:41.298981  501106 config.go:182] Loaded profile config "newest-cni-160257": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1009 20:21:41.299015  501106 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1009 20:21:41.299073  501106 addons.go:69] Setting storage-provisioner=true in profile "newest-cni-160257"
	I1009 20:21:41.299090  501106 addons.go:238] Setting addon storage-provisioner=true in "newest-cni-160257"
	I1009 20:21:41.299111  501106 host.go:66] Checking if "newest-cni-160257" exists ...
	I1009 20:21:41.299603  501106 cli_runner.go:164] Run: docker container inspect newest-cni-160257 --format={{.State.Status}}
	I1009 20:21:41.299916  501106 addons.go:69] Setting default-storageclass=true in profile "newest-cni-160257"
	I1009 20:21:41.299934  501106 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-160257"
	I1009 20:21:41.300197  501106 cli_runner.go:164] Run: docker container inspect newest-cni-160257 --format={{.State.Status}}
	I1009 20:21:41.304369  501106 out.go:179] * Verifying Kubernetes components...
	I1009 20:21:41.307791  501106 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 20:21:41.341898  501106 addons.go:238] Setting addon default-storageclass=true in "newest-cni-160257"
	I1009 20:21:41.341941  501106 host.go:66] Checking if "newest-cni-160257" exists ...
	I1009 20:21:41.342378  501106 cli_runner.go:164] Run: docker container inspect newest-cni-160257 --format={{.State.Status}}
	I1009 20:21:41.351244  501106 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1009 20:21:41.354173  501106 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1009 20:21:41.354194  501106 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1009 20:21:41.354264  501106 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-160257
	I1009 20:21:41.374873  501106 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1009 20:21:41.374898  501106 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1009 20:21:41.374968  501106 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-160257
	I1009 20:21:41.393528  501106 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33456 SSHKeyPath:/home/jenkins/minikube-integration/21683-294150/.minikube/machines/newest-cni-160257/id_rsa Username:docker}
	I1009 20:21:41.420169  501106 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33456 SSHKeyPath:/home/jenkins/minikube-integration/21683-294150/.minikube/machines/newest-cni-160257/id_rsa Username:docker}
	I1009 20:21:41.665235  501106 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1009 20:21:41.684112  501106 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1009 20:21:41.694316  501106 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1009 20:21:41.723929  501106 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1009 20:21:42.343923  501106 start.go:977] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS's ConfigMap
	I1009 20:21:42.347475  501106 api_server.go:52] waiting for apiserver process to appear ...
	I1009 20:21:42.347588  501106 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:21:42.658002  501106 api_server.go:72] duration metric: took 1.359372541s to wait for apiserver process to appear ...
	I1009 20:21:42.658029  501106 api_server.go:88] waiting for apiserver healthz status ...
	I1009 20:21:42.658049  501106 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1009 20:21:42.661568  501106 out.go:179] * Enabled addons: default-storageclass, storage-provisioner
	I1009 20:21:42.665352  501106 addons.go:514] duration metric: took 1.366318147s for enable addons: enabled=[default-storageclass storage-provisioner]
	I1009 20:21:42.673241  501106 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1009 20:21:42.678022  501106 api_server.go:141] control plane version: v1.34.1
	I1009 20:21:42.678196  501106 api_server.go:131] duration metric: took 20.158261ms to wait for apiserver health ...
	I1009 20:21:42.678223  501106 system_pods.go:43] waiting for kube-system pods to appear ...
	I1009 20:21:42.692323  501106 system_pods.go:59] 8 kube-system pods found
	I1009 20:21:42.692428  501106 system_pods.go:61] "coredns-66bc5c9577-h6jjt" [48d28596-1503-4675-b84d-a0770eea0d66] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1009 20:21:42.692458  501106 system_pods.go:61] "etcd-newest-cni-160257" [7c59b451-dfcc-492f-a84f-2b02319332fb] Running
	I1009 20:21:42.692503  501106 system_pods.go:61] "kindnet-bgspl" [d8f6a466-a843-4773-968c-86550cdbe807] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1009 20:21:42.692530  501106 system_pods.go:61] "kube-apiserver-newest-cni-160257" [12beea36-feb5-44e6-8093-e6627a7c0bc4] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1009 20:21:42.692561  501106 system_pods.go:61] "kube-controller-manager-newest-cni-160257" [d721fd3e-4510-4c9d-8156-1389f2c157e2] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1009 20:21:42.692592  501106 system_pods.go:61] "kube-proxy-q5mpb" [efd41b4d-05f4-4870-b04c-cca5ec803e68] Running
	I1009 20:21:42.692616  501106 system_pods.go:61] "kube-scheduler-newest-cni-160257" [80050cec-2104-4888-a8e1-611f33e21d87] Running
	I1009 20:21:42.692638  501106 system_pods.go:61] "storage-provisioner" [d17148c8-3517-4026-aa73-4a1705edbddf] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1009 20:21:42.692676  501106 system_pods.go:74] duration metric: took 14.434711ms to wait for pod list to return data ...
	I1009 20:21:42.692703  501106 default_sa.go:34] waiting for default service account to be created ...
	I1009 20:21:42.702032  501106 default_sa.go:45] found service account: "default"
	I1009 20:21:42.702109  501106 default_sa.go:55] duration metric: took 9.384734ms for default service account to be created ...
	I1009 20:21:42.702137  501106 kubeadm.go:586] duration metric: took 1.403512341s to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1009 20:21:42.702185  501106 node_conditions.go:102] verifying NodePressure condition ...
	I1009 20:21:42.710521  501106 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1009 20:21:42.710600  501106 node_conditions.go:123] node cpu capacity is 2
	I1009 20:21:42.710630  501106 node_conditions.go:105] duration metric: took 8.420693ms to run NodePressure ...
	I1009 20:21:42.710674  501106 start.go:242] waiting for startup goroutines ...
	I1009 20:21:42.848558  501106 kapi.go:214] "coredns" deployment in "kube-system" namespace and "newest-cni-160257" context rescaled to 1 replicas
	I1009 20:21:42.848636  501106 start.go:247] waiting for cluster config update ...
	I1009 20:21:42.848663  501106 start.go:256] writing updated cluster config ...
	I1009 20:21:42.849013  501106 ssh_runner.go:195] Run: rm -f paused
	I1009 20:21:42.935726  501106 start.go:628] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1009 20:21:42.939106  501106 out.go:179] * Done! kubectl is now configured to use "newest-cni-160257" cluster and "default" namespace by default
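	The tail of the start log above covers two steps: the host.minikube.internal record is injected into the CoreDNS ConfigMap, and the default-storageclass and storage-provisioner addons are applied. A minimal spot-check from the host is sketched below; it assumes the kubectl context carries the profile name newest-cni-160257 (as the "Done!" line configures) and that the minikube default StorageClass keeps its usual name, standard.
	
	  kubectl --context newest-cni-160257 -n kube-system get configmap coredns -o yaml   # hosts block should list 192.168.76.1 host.minikube.internal
	  kubectl --context newest-cni-160257 get storageclass standard
	  kubectl --context newest-cni-160257 -n kube-system get pod storage-provisioner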
	
	
	==> CRI-O <==
	Oct 09 20:21:41 newest-cni-160257 crio[841]: time="2025-10-09T20:21:41.859424337Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 09 20:21:41 newest-cni-160257 crio[841]: time="2025-10-09T20:21:41.867483894Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=ed133fd7-f37e-4926-bc05-2fd5cff01589 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 09 20:21:41 newest-cni-160257 crio[841]: time="2025-10-09T20:21:41.882539896Z" level=info msg="Ran pod sandbox 0c3525af5d49af1a700badbe679ed49b8510b67267a04a2b69b14146c0b5f9eb with infra container: kube-system/kindnet-bgspl/POD" id=ed133fd7-f37e-4926-bc05-2fd5cff01589 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 09 20:21:41 newest-cni-160257 crio[841]: time="2025-10-09T20:21:41.884431226Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=843d0d15-c31e-465b-a970-b8c795d7e063 name=/runtime.v1.ImageService/ImageStatus
	Oct 09 20:21:41 newest-cni-160257 crio[841]: time="2025-10-09T20:21:41.894615511Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=5c0f0fe5-5d27-4c8b-88ed-f3f058891659 name=/runtime.v1.ImageService/ImageStatus
	Oct 09 20:21:41 newest-cni-160257 crio[841]: time="2025-10-09T20:21:41.901048955Z" level=info msg="Creating container: kube-system/kindnet-bgspl/kindnet-cni" id=4df846ed-18b5-40e1-9de7-6a6cabb437d7 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 20:21:41 newest-cni-160257 crio[841]: time="2025-10-09T20:21:41.901484833Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 09 20:21:41 newest-cni-160257 crio[841]: time="2025-10-09T20:21:41.908216053Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 09 20:21:41 newest-cni-160257 crio[841]: time="2025-10-09T20:21:41.909214687Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 09 20:21:41 newest-cni-160257 crio[841]: time="2025-10-09T20:21:41.940246613Z" level=info msg="Created container 90b9efe257a3ae40badab11985b009a9d5e33de58826868550824e3d0d98ff0d: kube-system/kindnet-bgspl/kindnet-cni" id=4df846ed-18b5-40e1-9de7-6a6cabb437d7 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 20:21:41 newest-cni-160257 crio[841]: time="2025-10-09T20:21:41.944264198Z" level=info msg="Starting container: 90b9efe257a3ae40badab11985b009a9d5e33de58826868550824e3d0d98ff0d" id=76b2ccd6-0f85-4776-b4dc-29a8e8fdbf6a name=/runtime.v1.RuntimeService/StartContainer
	Oct 09 20:21:41 newest-cni-160257 crio[841]: time="2025-10-09T20:21:41.95874101Z" level=info msg="Started container" PID=1484 containerID=90b9efe257a3ae40badab11985b009a9d5e33de58826868550824e3d0d98ff0d description=kube-system/kindnet-bgspl/kindnet-cni id=76b2ccd6-0f85-4776-b4dc-29a8e8fdbf6a name=/runtime.v1.RuntimeService/StartContainer sandboxID=0c3525af5d49af1a700badbe679ed49b8510b67267a04a2b69b14146c0b5f9eb
	Oct 09 20:21:42 newest-cni-160257 crio[841]: time="2025-10-09T20:21:42.085792705Z" level=info msg="Running pod sandbox: kube-system/kube-proxy-q5mpb/POD" id=67d7abcc-f611-4bdf-8602-d0401a50cf35 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 09 20:21:42 newest-cni-160257 crio[841]: time="2025-10-09T20:21:42.08586194Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 09 20:21:42 newest-cni-160257 crio[841]: time="2025-10-09T20:21:42.094078963Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=67d7abcc-f611-4bdf-8602-d0401a50cf35 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 09 20:21:42 newest-cni-160257 crio[841]: time="2025-10-09T20:21:42.101077977Z" level=info msg="Ran pod sandbox ca0abba41aca4dabed2bc85b000241c2bb7ecdc59272c4a745beae145afc5b98 with infra container: kube-system/kube-proxy-q5mpb/POD" id=67d7abcc-f611-4bdf-8602-d0401a50cf35 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 09 20:21:42 newest-cni-160257 crio[841]: time="2025-10-09T20:21:42.107576792Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=8098cb23-355e-4479-8d7a-3bedaf7cc283 name=/runtime.v1.ImageService/ImageStatus
	Oct 09 20:21:42 newest-cni-160257 crio[841]: time="2025-10-09T20:21:42.111424792Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=b89204ae-b39d-4394-87e5-c85466e833db name=/runtime.v1.ImageService/ImageStatus
	Oct 09 20:21:42 newest-cni-160257 crio[841]: time="2025-10-09T20:21:42.122762363Z" level=info msg="Creating container: kube-system/kube-proxy-q5mpb/kube-proxy" id=128ba999-70bb-4719-8c66-d8423bce2c78 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 20:21:42 newest-cni-160257 crio[841]: time="2025-10-09T20:21:42.123154458Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 09 20:21:42 newest-cni-160257 crio[841]: time="2025-10-09T20:21:42.129707846Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 09 20:21:42 newest-cni-160257 crio[841]: time="2025-10-09T20:21:42.131581117Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 09 20:21:42 newest-cni-160257 crio[841]: time="2025-10-09T20:21:42.228487436Z" level=info msg="Created container 342d2d97c3c4a713bf8e4db779a3fa88bec809b6a439fad5571eb1a2544ced92: kube-system/kube-proxy-q5mpb/kube-proxy" id=128ba999-70bb-4719-8c66-d8423bce2c78 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 20:21:42 newest-cni-160257 crio[841]: time="2025-10-09T20:21:42.230597576Z" level=info msg="Starting container: 342d2d97c3c4a713bf8e4db779a3fa88bec809b6a439fad5571eb1a2544ced92" id=31390167-d1b7-4024-b420-61066da8da52 name=/runtime.v1.RuntimeService/StartContainer
	Oct 09 20:21:42 newest-cni-160257 crio[841]: time="2025-10-09T20:21:42.234308606Z" level=info msg="Started container" PID=1503 containerID=342d2d97c3c4a713bf8e4db779a3fa88bec809b6a439fad5571eb1a2544ced92 description=kube-system/kube-proxy-q5mpb/kube-proxy id=31390167-d1b7-4024-b420-61066da8da52 name=/runtime.v1.RuntimeService/StartContainer sandboxID=ca0abba41aca4dabed2bc85b000241c2bb7ecdc59272c4a745beae145afc5b98
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	342d2d97c3c4a       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9   2 seconds ago       Running             kube-proxy                0                   ca0abba41aca4       kube-proxy-q5mpb                            kube-system
	90b9efe257a3a       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c   2 seconds ago       Running             kindnet-cni               0                   0c3525af5d49a       kindnet-bgspl                               kube-system
	e4d81f665c56c       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a   18 seconds ago      Running             kube-controller-manager   0                   4c418bb247afd       kube-controller-manager-newest-cni-160257   kube-system
	24f5971848eaf       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e   18 seconds ago      Running             etcd                      0                   98d43ca5a3473       etcd-newest-cni-160257                      kube-system
	ccfff3b2782fc       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196   18 seconds ago      Running             kube-apiserver            0                   c22b9459aceb1       kube-apiserver-newest-cni-160257            kube-system
	b89710ec150c4       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0   18 seconds ago      Running             kube-scheduler            0                   5a0cdb46a45dc       kube-scheduler-newest-cni-160257            kube-system
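	The container status table above is the CRI-level view of the node. Roughly the same listing can be pulled by hand with crictl; the second form below is a sketch that assumes the usual CRI-O socket path inside the minikube node.
	
	  out/minikube-linux-arm64 -p newest-cni-160257 ssh -- sudo crictl ps -a
	  out/minikube-linux-arm64 -p newest-cni-160257 ssh -- sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a   # explicit socket variant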
	
	
	==> describe nodes <==
	Name:               newest-cni-160257
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=newest-cni-160257
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=67931d2cc0a2e29153be17ee4f2d502b8a45c9cb
	                    minikube.k8s.io/name=newest-cni-160257
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_09T20_21_36_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 09 Oct 2025 20:21:32 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  newest-cni-160257
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 09 Oct 2025 20:21:35 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 09 Oct 2025 20:21:35 +0000   Thu, 09 Oct 2025 20:21:27 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 09 Oct 2025 20:21:35 +0000   Thu, 09 Oct 2025 20:21:27 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 09 Oct 2025 20:21:35 +0000   Thu, 09 Oct 2025 20:21:27 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Thu, 09 Oct 2025 20:21:35 +0000   Thu, 09 Oct 2025 20:21:27 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/cni/net.d/. Has your network provider started?
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    newest-cni-160257
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022292Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022292Ki
	  pods:               110
	System Info:
	  Machine ID:                 e1bc3e5b177c42c0958e54fa9db66c30
	  System UUID:                0382347f-ca4b-4cf8-b386-5e98e49e227d
	  Boot ID:                    7eb9c7d9-8be8-415a-b196-052b632584a4
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.42.0.0/24
	PodCIDRs:                     10.42.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-newest-cni-160257                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         9s
	  kube-system                 kindnet-bgspl                                100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      3s
	  kube-system                 kube-apiserver-newest-cni-160257             250m (12%)    0 (0%)      0 (0%)           0 (0%)         9s
	  kube-system                 kube-controller-manager-newest-cni-160257    200m (10%)    0 (0%)      0 (0%)           0 (0%)         11s
	  kube-system                 kube-proxy-q5mpb                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         3s
	  kube-system                 kube-scheduler-newest-cni-160257             100m (5%)     0 (0%)      0 (0%)           0 (0%)         9s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (1%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 1s                 kube-proxy       
	  Normal   NodeHasSufficientMemory  19s (x8 over 19s)  kubelet          Node newest-cni-160257 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    19s (x8 over 19s)  kubelet          Node newest-cni-160257 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     19s (x8 over 19s)  kubelet          Node newest-cni-160257 status is now: NodeHasSufficientPID
	  Normal   Starting                 9s                 kubelet          Starting kubelet.
	  Warning  CgroupV1                 9s                 kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  9s                 kubelet          Node newest-cni-160257 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    9s                 kubelet          Node newest-cni-160257 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     9s                 kubelet          Node newest-cni-160257 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           4s                 node-controller  Node newest-cni-160257 event: Registered Node newest-cni-160257 in Controller
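	The Ready=False condition and the node.kubernetes.io/not-ready:NoSchedule taint above reflect that no CNI configuration was present yet when this snapshot was taken; kindnet had started only seconds earlier. One way to watch the taint clear once the CNI writes its config, assuming the same kubectl context name, is sketched below.
	
	  kubectl --context newest-cni-160257 get node newest-cni-160257 \
	    -o jsonpath='{.spec.taints}{"\n"}{.status.conditions[?(@.type=="Ready")].status}{"\n"}'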
	
	
	==> dmesg <==
	[ +27.967875] overlayfs: idmapped layers are currently not supported
	[  +2.167003] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:51] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:52] overlayfs: idmapped layers are currently not supported
	[ +41.056229] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:53] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:54] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:55] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:57] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:59] overlayfs: idmapped layers are currently not supported
	[ +30.257956] overlayfs: idmapped layers are currently not supported
	[Oct 9 20:02] overlayfs: idmapped layers are currently not supported
	[Oct 9 20:04] overlayfs: idmapped layers are currently not supported
	[Oct 9 20:06] overlayfs: idmapped layers are currently not supported
	[Oct 9 20:12] overlayfs: idmapped layers are currently not supported
	[Oct 9 20:14] overlayfs: idmapped layers are currently not supported
	[Oct 9 20:15] overlayfs: idmapped layers are currently not supported
	[Oct 9 20:16] overlayfs: idmapped layers are currently not supported
	[ +23.810739] overlayfs: idmapped layers are currently not supported
	[Oct 9 20:18] overlayfs: idmapped layers are currently not supported
	[ +26.082927] overlayfs: idmapped layers are currently not supported
	[Oct 9 20:19] overlayfs: idmapped layers are currently not supported
	[ +21.956614] overlayfs: idmapped layers are currently not supported
	[Oct 9 20:21] overlayfs: idmapped layers are currently not supported
	[ +16.062221] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [24f5971848eafbbc6c58fbd0bc3b00784e8f2579a972bcd5da30f4f6e4e3dd61] <==
	{"level":"warn","ts":"2025-10-09T20:21:30.356508Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37296","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T20:21:30.396728Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37320","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T20:21:30.427547Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37328","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T20:21:30.443799Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37346","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T20:21:30.471604Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37350","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T20:21:30.492296Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37378","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T20:21:30.521916Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37406","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T20:21:30.550616Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37420","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T20:21:30.559962Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37432","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T20:21:30.608116Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37450","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T20:21:30.609319Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37466","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T20:21:30.628650Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37478","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T20:21:30.644157Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37498","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T20:21:30.675772Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37512","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T20:21:30.695231Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37538","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T20:21:30.715163Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37554","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T20:21:30.735396Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37564","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T20:21:30.762793Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37604","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T20:21:30.811480Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37612","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T20:21:30.847867Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37630","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T20:21:30.860487Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37650","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T20:21:30.890426Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37658","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T20:21:30.947109Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37702","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T20:21:30.954113Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37680","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T20:21:31.093860Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37726","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 20:21:44 up  3:04,  0 user,  load average: 5.42, 3.64, 2.42
	Linux newest-cni-160257 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [90b9efe257a3ae40badab11985b009a9d5e33de58826868550824e3d0d98ff0d] <==
	I1009 20:21:42.111400       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1009 20:21:42.112302       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1009 20:21:42.112459       1 main.go:148] setting mtu 1500 for CNI 
	I1009 20:21:42.112473       1 main.go:178] kindnetd IP family: "ipv4"
	I1009 20:21:42.112490       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-09T20:21:42Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1009 20:21:42.404405       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1009 20:21:42.404441       1 controller.go:381] "Waiting for informer caches to sync"
	I1009 20:21:42.404450       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1009 20:21:42.405722       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	
	
	==> kube-apiserver [ccfff3b2782fc22581280b4c0fc00fdf19a147ec4d96c5beb6b3fca920f75351] <==
	E1009 20:21:32.380655       1 controller.go:148] "Unhandled Error" err="while syncing ConfigMap \"kube-system/kube-apiserver-legacy-service-account-token-tracking\", err: namespaces \"kube-system\" not found" logger="UnhandledError"
	I1009 20:21:32.428116       1 controller.go:667] quota admission added evaluator for: namespaces
	E1009 20:21:32.460579       1 controller.go:145] "Failed to ensure lease exists, will retry" err="namespaces \"kube-system\" not found" interval="200ms"
	I1009 20:21:32.491174       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1009 20:21:32.491287       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1009 20:21:32.518958       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1009 20:21:32.541533       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1009 20:21:32.673858       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1009 20:21:33.024269       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1009 20:21:33.029505       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1009 20:21:33.029591       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1009 20:21:34.155832       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1009 20:21:34.220712       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1009 20:21:34.337608       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1009 20:21:34.368552       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.76.2]
	I1009 20:21:34.375533       1 controller.go:667] quota admission added evaluator for: endpoints
	I1009 20:21:34.388365       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1009 20:21:35.150188       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1009 20:21:35.163682       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1009 20:21:35.194222       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1009 20:21:35.207748       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1009 20:21:41.040380       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1009 20:21:41.295207       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	I1009 20:21:41.577945       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1009 20:21:41.614899       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	
	
	==> kube-controller-manager [e4d81f665c56ca3f2ff6b08acf1d137726ff57f7d3aa27282624da101c939a41] <==
	I1009 20:21:40.345389       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1009 20:21:40.347314       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1009 20:21:40.349977       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1009 20:21:40.351254       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1009 20:21:40.366764       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1009 20:21:40.373175       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1009 20:21:40.380796       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1009 20:21:40.381780       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1009 20:21:40.381828       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1009 20:21:40.381850       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1009 20:21:40.381858       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1009 20:21:40.381926       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1009 20:21:40.381966       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1009 20:21:40.381964       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1009 20:21:40.382324       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="newest-cni-160257"
	I1009 20:21:40.382365       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1009 20:21:40.385336       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1009 20:21:40.385371       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1009 20:21:40.386513       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1009 20:21:40.386672       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1009 20:21:40.388548       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1009 20:21:40.392217       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1009 20:21:40.394468       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1009 20:21:40.400786       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1009 20:21:40.416294       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	
	
	==> kube-proxy [342d2d97c3c4a713bf8e4db779a3fa88bec809b6a439fad5571eb1a2544ced92] <==
	I1009 20:21:42.372318       1 server_linux.go:53] "Using iptables proxy"
	I1009 20:21:42.538865       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1009 20:21:42.648723       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1009 20:21:42.648757       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1009 20:21:42.648822       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1009 20:21:42.717031       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1009 20:21:42.717254       1 server_linux.go:132] "Using iptables Proxier"
	I1009 20:21:42.722805       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1009 20:21:42.723294       1 server.go:527] "Version info" version="v1.34.1"
	I1009 20:21:42.723311       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1009 20:21:42.725753       1 config.go:200] "Starting service config controller"
	I1009 20:21:42.726054       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1009 20:21:42.726126       1 config.go:106] "Starting endpoint slice config controller"
	I1009 20:21:42.726180       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1009 20:21:42.726276       1 config.go:403] "Starting serviceCIDR config controller"
	I1009 20:21:42.726307       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1009 20:21:42.727181       1 config.go:309] "Starting node config controller"
	I1009 20:21:42.727238       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1009 20:21:42.727269       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1009 20:21:42.826438       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1009 20:21:42.826563       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1009 20:21:42.826608       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
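	"Using iptables Proxier" above means service traffic is programmed through the KUBE-SERVICES chain in the nat table. A quick way to inspect those rules on the node, assuming iptables is readable inside the kicbase image, is:
	
	  out/minikube-linux-arm64 -p newest-cni-160257 ssh -- sudo iptables -t nat -L KUBE-SERVICES -n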
	
	
	==> kube-scheduler [b89710ec150c40ade1a2ed21a8a8e964bf010706e9147d373179d335339c96b7] <==
	E1009 20:21:32.326500       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1009 20:21:32.326596       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1009 20:21:32.326651       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1009 20:21:32.326696       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1009 20:21:32.326817       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1009 20:21:32.326881       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1009 20:21:32.328507       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1009 20:21:32.330376       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1009 20:21:32.330439       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1009 20:21:32.330454       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1009 20:21:33.139868       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1009 20:21:33.172590       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1009 20:21:33.281134       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1009 20:21:33.284460       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1009 20:21:33.324410       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1009 20:21:33.474348       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1009 20:21:33.474415       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1009 20:21:33.475966       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1009 20:21:33.522263       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1009 20:21:33.547705       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1009 20:21:33.569340       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1009 20:21:33.597943       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1009 20:21:33.698168       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1009 20:21:33.831895       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	I1009 20:21:35.864549       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 09 20:21:35 newest-cni-160257 kubelet[1293]: I1009 20:21:35.749635    1293 kubelet_node_status.go:75] "Attempting to register node" node="newest-cni-160257"
	Oct 09 20:21:35 newest-cni-160257 kubelet[1293]: I1009 20:21:35.799374    1293 kubelet_node_status.go:124] "Node was previously registered" node="newest-cni-160257"
	Oct 09 20:21:35 newest-cni-160257 kubelet[1293]: I1009 20:21:35.799477    1293 kubelet_node_status.go:78] "Successfully registered node" node="newest-cni-160257"
	Oct 09 20:21:36 newest-cni-160257 kubelet[1293]: I1009 20:21:36.246532    1293 apiserver.go:52] "Watching apiserver"
	Oct 09 20:21:36 newest-cni-160257 kubelet[1293]: I1009 20:21:36.315764    1293 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Oct 09 20:21:36 newest-cni-160257 kubelet[1293]: I1009 20:21:36.412015    1293 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-newest-cni-160257" podStartSLOduration=1.411992592 podStartE2EDuration="1.411992592s" podCreationTimestamp="2025-10-09 20:21:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-09 20:21:36.375227096 +0000 UTC m=+1.321853914" watchObservedRunningTime="2025-10-09 20:21:36.411992592 +0000 UTC m=+1.358619517"
	Oct 09 20:21:36 newest-cni-160257 kubelet[1293]: I1009 20:21:36.412185    1293 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-newest-cni-160257" podStartSLOduration=3.412178162 podStartE2EDuration="3.412178162s" podCreationTimestamp="2025-10-09 20:21:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-09 20:21:36.410688412 +0000 UTC m=+1.357315238" watchObservedRunningTime="2025-10-09 20:21:36.412178162 +0000 UTC m=+1.358804980"
	Oct 09 20:21:36 newest-cni-160257 kubelet[1293]: I1009 20:21:36.456673    1293 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/etcd-newest-cni-160257" podStartSLOduration=1.456641389 podStartE2EDuration="1.456641389s" podCreationTimestamp="2025-10-09 20:21:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-09 20:21:36.456534853 +0000 UTC m=+1.403161671" watchObservedRunningTime="2025-10-09 20:21:36.456641389 +0000 UTC m=+1.403268207"
	Oct 09 20:21:36 newest-cni-160257 kubelet[1293]: I1009 20:21:36.456921    1293 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-newest-cni-160257" podStartSLOduration=1.4569147550000001 podStartE2EDuration="1.456914755s" podCreationTimestamp="2025-10-09 20:21:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-09 20:21:36.433796627 +0000 UTC m=+1.380423470" watchObservedRunningTime="2025-10-09 20:21:36.456914755 +0000 UTC m=+1.403541573"
	Oct 09 20:21:36 newest-cni-160257 kubelet[1293]: I1009 20:21:36.624450    1293 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-newest-cni-160257"
	Oct 09 20:21:36 newest-cni-160257 kubelet[1293]: E1009 20:21:36.711560    1293 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-newest-cni-160257\" already exists" pod="kube-system/kube-scheduler-newest-cni-160257"
	Oct 09 20:21:40 newest-cni-160257 kubelet[1293]: I1009 20:21:40.383772    1293 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.42.0.0/24"
	Oct 09 20:21:40 newest-cni-160257 kubelet[1293]: I1009 20:21:40.384418    1293 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.42.0.0/24"
	Oct 09 20:21:41 newest-cni-160257 kubelet[1293]: I1009 20:21:41.593624    1293 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/efd41b4d-05f4-4870-b04c-cca5ec803e68-kube-proxy\") pod \"kube-proxy-q5mpb\" (UID: \"efd41b4d-05f4-4870-b04c-cca5ec803e68\") " pod="kube-system/kube-proxy-q5mpb"
	Oct 09 20:21:41 newest-cni-160257 kubelet[1293]: I1009 20:21:41.593675    1293 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d8f6a466-a843-4773-968c-86550cdbe807-lib-modules\") pod \"kindnet-bgspl\" (UID: \"d8f6a466-a843-4773-968c-86550cdbe807\") " pod="kube-system/kindnet-bgspl"
	Oct 09 20:21:41 newest-cni-160257 kubelet[1293]: I1009 20:21:41.593710    1293 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/efd41b4d-05f4-4870-b04c-cca5ec803e68-lib-modules\") pod \"kube-proxy-q5mpb\" (UID: \"efd41b4d-05f4-4870-b04c-cca5ec803e68\") " pod="kube-system/kube-proxy-q5mpb"
	Oct 09 20:21:41 newest-cni-160257 kubelet[1293]: I1009 20:21:41.593731    1293 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/efd41b4d-05f4-4870-b04c-cca5ec803e68-xtables-lock\") pod \"kube-proxy-q5mpb\" (UID: \"efd41b4d-05f4-4870-b04c-cca5ec803e68\") " pod="kube-system/kube-proxy-q5mpb"
	Oct 09 20:21:41 newest-cni-160257 kubelet[1293]: I1009 20:21:41.593749    1293 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tm542\" (UniqueName: \"kubernetes.io/projected/efd41b4d-05f4-4870-b04c-cca5ec803e68-kube-api-access-tm542\") pod \"kube-proxy-q5mpb\" (UID: \"efd41b4d-05f4-4870-b04c-cca5ec803e68\") " pod="kube-system/kube-proxy-q5mpb"
	Oct 09 20:21:41 newest-cni-160257 kubelet[1293]: I1009 20:21:41.593783    1293 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/d8f6a466-a843-4773-968c-86550cdbe807-cni-cfg\") pod \"kindnet-bgspl\" (UID: \"d8f6a466-a843-4773-968c-86550cdbe807\") " pod="kube-system/kindnet-bgspl"
	Oct 09 20:21:41 newest-cni-160257 kubelet[1293]: I1009 20:21:41.593801    1293 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v8lx2\" (UniqueName: \"kubernetes.io/projected/d8f6a466-a843-4773-968c-86550cdbe807-kube-api-access-v8lx2\") pod \"kindnet-bgspl\" (UID: \"d8f6a466-a843-4773-968c-86550cdbe807\") " pod="kube-system/kindnet-bgspl"
	Oct 09 20:21:41 newest-cni-160257 kubelet[1293]: I1009 20:21:41.593835    1293 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d8f6a466-a843-4773-968c-86550cdbe807-xtables-lock\") pod \"kindnet-bgspl\" (UID: \"d8f6a466-a843-4773-968c-86550cdbe807\") " pod="kube-system/kindnet-bgspl"
	Oct 09 20:21:41 newest-cni-160257 kubelet[1293]: I1009 20:21:41.786498    1293 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	Oct 09 20:21:41 newest-cni-160257 kubelet[1293]: W1009 20:21:41.877007    1293 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/b09c68fc79eaa0587a228b9cc096b0eae173cd347de717a0ae93a73ef6ea01b7/crio-0c3525af5d49af1a700badbe679ed49b8510b67267a04a2b69b14146c0b5f9eb WatchSource:0}: Error finding container 0c3525af5d49af1a700badbe679ed49b8510b67267a04a2b69b14146c0b5f9eb: Status 404 returned error can't find the container with id 0c3525af5d49af1a700badbe679ed49b8510b67267a04a2b69b14146c0b5f9eb
	Oct 09 20:21:42 newest-cni-160257 kubelet[1293]: I1009 20:21:42.707542    1293 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-q5mpb" podStartSLOduration=1.7075253849999998 podStartE2EDuration="1.707525385s" podCreationTimestamp="2025-10-09 20:21:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-09 20:21:42.669129879 +0000 UTC m=+7.615756730" watchObservedRunningTime="2025-10-09 20:21:42.707525385 +0000 UTC m=+7.654152204"
	Oct 09 20:21:42 newest-cni-160257 kubelet[1293]: I1009 20:21:42.707661    1293 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-bgspl" podStartSLOduration=1.707655406 podStartE2EDuration="1.707655406s" podCreationTimestamp="2025-10-09 20:21:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-09 20:21:42.707089015 +0000 UTC m=+7.653715841" watchObservedRunningTime="2025-10-09 20:21:42.707655406 +0000 UTC m=+7.654282232"
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-160257 -n newest-cni-160257
helpers_test.go:269: (dbg) Run:  kubectl --context newest-cni-160257 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: coredns-66bc5c9577-h6jjt storage-provisioner
helpers_test.go:282: ======> post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context newest-cni-160257 describe pod coredns-66bc5c9577-h6jjt storage-provisioner
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context newest-cni-160257 describe pod coredns-66bc5c9577-h6jjt storage-provisioner: exit status 1 (110.019723ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "coredns-66bc5c9577-h6jjt" not found
	Error from server (NotFound): pods "storage-provisioner" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context newest-cni-160257 describe pod coredns-66bc5c9577-h6jjt storage-provisioner: exit status 1
--- FAIL: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (2.74s)
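Note on the post-mortem above: the describe command is issued without a namespace flag, so kubectl looks for coredns-66bc5c9577-h6jjt and storage-provisioner in the default namespace and both come back NotFound even though they live in kube-system (the CoreDNS pod may also have been replaced after the deployment was rescaled to 1 replica). A namespaced variant of the same check, assuming the context still exists:

  kubectl --context newest-cni-160257 -n kube-system describe pod coredns-66bc5c9577-h6jjt storage-provisioner
  kubectl --context newest-cni-160257 -n kube-system get pods -l k8s-app=kube-dns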

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/Pause (7.51s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p newest-cni-160257 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 pause -p newest-cni-160257 --alsologtostderr -v=1: exit status 80 (2.535163391s)

                                                
                                                
-- stdout --
	* Pausing node newest-cni-160257 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1009 20:22:03.654911  507543 out.go:360] Setting OutFile to fd 1 ...
	I1009 20:22:03.655091  507543 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1009 20:22:03.655103  507543 out.go:374] Setting ErrFile to fd 2...
	I1009 20:22:03.655109  507543 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1009 20:22:03.655382  507543 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21683-294150/.minikube/bin
	I1009 20:22:03.655642  507543 out.go:368] Setting JSON to false
	I1009 20:22:03.655669  507543 mustload.go:65] Loading cluster: newest-cni-160257
	I1009 20:22:03.656062  507543 config.go:182] Loaded profile config "newest-cni-160257": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1009 20:22:03.656518  507543 cli_runner.go:164] Run: docker container inspect newest-cni-160257 --format={{.State.Status}}
	I1009 20:22:03.674345  507543 host.go:66] Checking if "newest-cni-160257" exists ...
	I1009 20:22:03.674962  507543 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1009 20:22:03.739293  507543 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:49 OomKillDisable:true NGoroutines:62 SystemTime:2025-10-09 20:22:03.72712797 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aa
rch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214827008 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path
:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1009 20:22:03.740097  507543 pause.go:58] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-
cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-arm64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1758198818-20370/minikube-v1.37.0-1758198818-20370-arm64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1758198818-20370-arm64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qe
mu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:newest-cni-160257 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true)
wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1009 20:22:03.743839  507543 out.go:179] * Pausing node newest-cni-160257 ... 
	I1009 20:22:03.746975  507543 host.go:66] Checking if "newest-cni-160257" exists ...
	I1009 20:22:03.747345  507543 ssh_runner.go:195] Run: systemctl --version
	I1009 20:22:03.747397  507543 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-160257
	I1009 20:22:03.767278  507543 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33461 SSHKeyPath:/home/jenkins/minikube-integration/21683-294150/.minikube/machines/newest-cni-160257/id_rsa Username:docker}
	I1009 20:22:03.874504  507543 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1009 20:22:03.891129  507543 pause.go:52] kubelet running: true
	I1009 20:22:03.891205  507543 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1009 20:22:04.200678  507543 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1009 20:22:04.200763  507543 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1009 20:22:04.272931  507543 cri.go:89] found id: "ce12541f225c961a2d52993ca9aeb9b77f4d1c3c1d6fd17ff705960d66604ae6"
	I1009 20:22:04.272953  507543 cri.go:89] found id: "e69f5d91a1b96da308ea7785822c1e647575c92e17d9a695bf51f239b3fc3ccd"
	I1009 20:22:04.272959  507543 cri.go:89] found id: "c44d38264b4c8f90676162945fe05a02624867f517df88a401a8ae08e56998fc"
	I1009 20:22:04.272963  507543 cri.go:89] found id: "abf4f184374a54e8b81747413f26453de47bd605a2aeb2a0889c7f019dc40141"
	I1009 20:22:04.272967  507543 cri.go:89] found id: "8e9d85a685b554c78f90ad52ce9e2e08feb85c5a0c3c0cecaa44409529755644"
	I1009 20:22:04.272971  507543 cri.go:89] found id: "1bb5609884005775d5cb2c3c1d622130225e6c83a8497006aa8e75133f859524"
	I1009 20:22:04.272974  507543 cri.go:89] found id: ""
	I1009 20:22:04.273043  507543 ssh_runner.go:195] Run: sudo runc list -f json
	I1009 20:22:04.286233  507543 retry.go:31] will retry after 146.99538ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-09T20:22:04Z" level=error msg="open /run/runc: no such file or directory"
	I1009 20:22:04.433617  507543 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1009 20:22:04.448771  507543 pause.go:52] kubelet running: false
	I1009 20:22:04.448860  507543 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1009 20:22:04.646458  507543 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1009 20:22:04.646587  507543 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1009 20:22:04.769384  507543 cri.go:89] found id: "ce12541f225c961a2d52993ca9aeb9b77f4d1c3c1d6fd17ff705960d66604ae6"
	I1009 20:22:04.769454  507543 cri.go:89] found id: "e69f5d91a1b96da308ea7785822c1e647575c92e17d9a695bf51f239b3fc3ccd"
	I1009 20:22:04.769480  507543 cri.go:89] found id: "c44d38264b4c8f90676162945fe05a02624867f517df88a401a8ae08e56998fc"
	I1009 20:22:04.769505  507543 cri.go:89] found id: "abf4f184374a54e8b81747413f26453de47bd605a2aeb2a0889c7f019dc40141"
	I1009 20:22:04.769536  507543 cri.go:89] found id: "8e9d85a685b554c78f90ad52ce9e2e08feb85c5a0c3c0cecaa44409529755644"
	I1009 20:22:04.769562  507543 cri.go:89] found id: "1bb5609884005775d5cb2c3c1d622130225e6c83a8497006aa8e75133f859524"
	I1009 20:22:04.769584  507543 cri.go:89] found id: ""
	I1009 20:22:04.769679  507543 ssh_runner.go:195] Run: sudo runc list -f json
	I1009 20:22:04.782378  507543 retry.go:31] will retry after 203.632237ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-09T20:22:04Z" level=error msg="open /run/runc: no such file or directory"
	I1009 20:22:04.986924  507543 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1009 20:22:05.009634  507543 pause.go:52] kubelet running: false
	I1009 20:22:05.009726  507543 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1009 20:22:05.167751  507543 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1009 20:22:05.167946  507543 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1009 20:22:05.239169  507543 cri.go:89] found id: "ce12541f225c961a2d52993ca9aeb9b77f4d1c3c1d6fd17ff705960d66604ae6"
	I1009 20:22:05.239192  507543 cri.go:89] found id: "e69f5d91a1b96da308ea7785822c1e647575c92e17d9a695bf51f239b3fc3ccd"
	I1009 20:22:05.239198  507543 cri.go:89] found id: "c44d38264b4c8f90676162945fe05a02624867f517df88a401a8ae08e56998fc"
	I1009 20:22:05.239203  507543 cri.go:89] found id: "abf4f184374a54e8b81747413f26453de47bd605a2aeb2a0889c7f019dc40141"
	I1009 20:22:05.239207  507543 cri.go:89] found id: "8e9d85a685b554c78f90ad52ce9e2e08feb85c5a0c3c0cecaa44409529755644"
	I1009 20:22:05.239211  507543 cri.go:89] found id: "1bb5609884005775d5cb2c3c1d622130225e6c83a8497006aa8e75133f859524"
	I1009 20:22:05.239214  507543 cri.go:89] found id: ""
	I1009 20:22:05.239291  507543 ssh_runner.go:195] Run: sudo runc list -f json
	I1009 20:22:05.251221  507543 retry.go:31] will retry after 618.712132ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-09T20:22:05Z" level=error msg="open /run/runc: no such file or directory"
	I1009 20:22:05.871141  507543 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1009 20:22:05.884261  507543 pause.go:52] kubelet running: false
	I1009 20:22:05.884324  507543 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1009 20:22:06.033419  507543 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1009 20:22:06.033562  507543 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1009 20:22:06.103050  507543 cri.go:89] found id: "ce12541f225c961a2d52993ca9aeb9b77f4d1c3c1d6fd17ff705960d66604ae6"
	I1009 20:22:06.103075  507543 cri.go:89] found id: "e69f5d91a1b96da308ea7785822c1e647575c92e17d9a695bf51f239b3fc3ccd"
	I1009 20:22:06.103081  507543 cri.go:89] found id: "c44d38264b4c8f90676162945fe05a02624867f517df88a401a8ae08e56998fc"
	I1009 20:22:06.103085  507543 cri.go:89] found id: "abf4f184374a54e8b81747413f26453de47bd605a2aeb2a0889c7f019dc40141"
	I1009 20:22:06.103088  507543 cri.go:89] found id: "8e9d85a685b554c78f90ad52ce9e2e08feb85c5a0c3c0cecaa44409529755644"
	I1009 20:22:06.103092  507543 cri.go:89] found id: "1bb5609884005775d5cb2c3c1d622130225e6c83a8497006aa8e75133f859524"
	I1009 20:22:06.103096  507543 cri.go:89] found id: ""
	I1009 20:22:06.103154  507543 ssh_runner.go:195] Run: sudo runc list -f json
	I1009 20:22:06.118169  507543 out.go:203] 
	W1009 20:22:06.121185  507543 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-09T20:22:06Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-09T20:22:06Z" level=error msg="open /run/runc: no such file or directory"
	
	W1009 20:22:06.121209  507543 out.go:285] * 
	* 
	W1009 20:22:06.126948  507543 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1009 20:22:06.129665  507543 out.go:203] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-arm64 pause -p newest-cni-160257 --alsologtostderr -v=1 failed: exit status 80
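Reading the failure above: crictl does find the six kube-system containers, but every `sudo runc list -f json` attempt fails with "open /run/runc: no such file or directory", and after three retries with growing delays the pause gives up with GUEST_PAUSE (exit status 80). The sketch below only illustrates the retry-with-backoff shape visible in the retry.go lines; the attempt count, delays, and jitter are illustrative assumptions rather than minikube's actual parameters.

	// Sketch of a retry loop with doubling delay and crude jitter, shelling out
	// the same command the pause path runs. Parameters are assumptions.
	package main

	import (
		"fmt"
		"math/rand"
		"os/exec"
		"time"
	)

	func runcList() error {
		out, err := exec.Command("sudo", "runc", "list", "-f", "json").CombinedOutput()
		if err != nil {
			return fmt.Errorf("runc list: %v: %s", err, out)
		}
		return nil
	}

	func main() {
		delay := 150 * time.Millisecond
		var err error
		for attempt := 0; attempt < 4; attempt++ {
			if err = runcList(); err == nil {
				return
			}
			wait := delay + time.Duration(rand.Int63n(int64(delay)))
			fmt.Printf("will retry after %v: %v\n", wait, err)
			time.Sleep(wait)
			delay *= 2
		}
		fmt.Printf("giving up: %v\n", err) // corresponds to the GUEST_PAUSE exit above
	}
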
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect newest-cni-160257
helpers_test.go:243: (dbg) docker inspect newest-cni-160257:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "b09c68fc79eaa0587a228b9cc096b0eae173cd347de717a0ae93a73ef6ea01b7",
	        "Created": "2025-10-09T20:21:08.350011602Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 505768,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-09T20:21:47.545091543Z",
	            "FinishedAt": "2025-10-09T20:21:46.373294825Z"
	        },
	        "Image": "sha256:0e619a02aa1f399c748a05785ca381533d7a01df724b62f3a9ddd9e288db8f6f",
	        "ResolvConfPath": "/var/lib/docker/containers/b09c68fc79eaa0587a228b9cc096b0eae173cd347de717a0ae93a73ef6ea01b7/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/b09c68fc79eaa0587a228b9cc096b0eae173cd347de717a0ae93a73ef6ea01b7/hostname",
	        "HostsPath": "/var/lib/docker/containers/b09c68fc79eaa0587a228b9cc096b0eae173cd347de717a0ae93a73ef6ea01b7/hosts",
	        "LogPath": "/var/lib/docker/containers/b09c68fc79eaa0587a228b9cc096b0eae173cd347de717a0ae93a73ef6ea01b7/b09c68fc79eaa0587a228b9cc096b0eae173cd347de717a0ae93a73ef6ea01b7-json.log",
	        "Name": "/newest-cni-160257",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "newest-cni-160257:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "newest-cni-160257",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "b09c68fc79eaa0587a228b9cc096b0eae173cd347de717a0ae93a73ef6ea01b7",
	                "LowerDir": "/var/lib/docker/overlay2/c3f22559d24a79b75dffe8207445a15a01a15487326878a474489ec60730e13e-init/diff:/var/lib/docker/overlay2/810a91395ed9b7ed2c0bbbdee8600efcf64f88722cbabc47d471235a9f901ed9/diff",
	                "MergedDir": "/var/lib/docker/overlay2/c3f22559d24a79b75dffe8207445a15a01a15487326878a474489ec60730e13e/merged",
	                "UpperDir": "/var/lib/docker/overlay2/c3f22559d24a79b75dffe8207445a15a01a15487326878a474489ec60730e13e/diff",
	                "WorkDir": "/var/lib/docker/overlay2/c3f22559d24a79b75dffe8207445a15a01a15487326878a474489ec60730e13e/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "newest-cni-160257",
	                "Source": "/var/lib/docker/volumes/newest-cni-160257/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-160257",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-160257",
	                "name.minikube.sigs.k8s.io": "newest-cni-160257",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "d9106aab5038866665aec1d9cf1595beaf76e67fcf40a13bced8ad21a40fb654",
	            "SandboxKey": "/var/run/docker/netns/d9106aab5038",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33461"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33462"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33465"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33463"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33464"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "newest-cni-160257": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "12:d1:17:17:9c:f8",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "870f78d9db06c84c4e340afedcf88d286d22b52c6864f8eefaae6f4f49447e55",
	                    "EndpointID": "c352e0783083d9ca3bbb1795bbe79f2fbde3bbaffdc19df66e814234184d5914",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "newest-cni-160257",
	                        "b09c68fc79ea"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
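The inspect output above is also where the pause path resolved its SSH endpoint: the 22/tcp binding is published on 127.0.0.1:33461, matching the "new ssh client" line in the pause log. A small sketch of that lookup, reusing the same Go template the cli_runner call in the log passes to docker (container name hard-coded here for illustration):

	// Sketch: extract the published host port for 22/tcp from `docker container inspect`.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		tmpl := `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
		out, err := exec.Command("docker", "container", "inspect", "-f", tmpl,
			"newest-cni-160257").Output()
		if err != nil {
			panic(err)
		}
		fmt.Println(strings.TrimSpace(string(out))) // "33461" for the container inspected above
	}
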
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-160257 -n newest-cni-160257
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-160257 -n newest-cni-160257: exit status 2 (364.300952ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
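A host state of "Running" together with exit status 2 is consistent with a half-paused profile: the container is up, but kubelet was already disabled by the failed pause, so the overall status is degraded. For reference, `--format={{.Host}}` is an ordinary Go text/template applied to the status value; the sketch below uses an assumed struct and field values purely to show the mechanics, not minikube's actual types.

	// Sketch: how a --format template such as {{.Host}} selects one field of a status value.
	package main

	import (
		"os"
		"text/template"
	)

	type Status struct {
		Host    string
		Kubelet string
	}

	func main() {
		st := Status{Host: "Running", Kubelet: "Stopped"}
		tmpl := template.Must(template.New("status").Parse("{{.Host}}\n"))
		_ = tmpl.Execute(os.Stdout, st) // prints "Running", as in the stdout block above
	}
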
helpers_test.go:252: <<< TestStartStop/group/newest-cni/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-160257 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p newest-cni-160257 logs -n 25: (1.161129769s)
helpers_test.go:260: TestStartStop/group/newest-cni/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬──────────────
───────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼──────────────
───────┤
	│ image   │ no-preload-020313 image list --format=json                                                                                                                                                                                                    │ no-preload-020313            │ jenkins │ v1.37.0 │ 09 Oct 25 20:18 UTC │ 09 Oct 25 20:18 UTC │
	│ pause   │ -p no-preload-020313 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-020313            │ jenkins │ v1.37.0 │ 09 Oct 25 20:18 UTC │                     │
	│ delete  │ -p no-preload-020313                                                                                                                                                                                                                          │ no-preload-020313            │ jenkins │ v1.37.0 │ 09 Oct 25 20:19 UTC │ 09 Oct 25 20:19 UTC │
	│ delete  │ -p no-preload-020313                                                                                                                                                                                                                          │ no-preload-020313            │ jenkins │ v1.37.0 │ 09 Oct 25 20:19 UTC │ 09 Oct 25 20:19 UTC │
	│ delete  │ -p disable-driver-mounts-613966                                                                                                                                                                                                               │ disable-driver-mounts-613966 │ jenkins │ v1.37.0 │ 09 Oct 25 20:19 UTC │ 09 Oct 25 20:19 UTC │
	│ start   │ -p default-k8s-diff-port-417984 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-417984 │ jenkins │ v1.37.0 │ 09 Oct 25 20:19 UTC │ 09 Oct 25 20:20 UTC │
	│ addons  │ enable metrics-server -p embed-certs-565110 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-565110           │ jenkins │ v1.37.0 │ 09 Oct 25 20:19 UTC │                     │
	│ stop    │ -p embed-certs-565110 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-565110           │ jenkins │ v1.37.0 │ 09 Oct 25 20:19 UTC │ 09 Oct 25 20:19 UTC │
	│ addons  │ enable dashboard -p embed-certs-565110 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-565110           │ jenkins │ v1.37.0 │ 09 Oct 25 20:19 UTC │ 09 Oct 25 20:19 UTC │
	│ start   │ -p embed-certs-565110 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-565110           │ jenkins │ v1.37.0 │ 09 Oct 25 20:19 UTC │ 09 Oct 25 20:20 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-417984 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-417984 │ jenkins │ v1.37.0 │ 09 Oct 25 20:20 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-417984 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-417984 │ jenkins │ v1.37.0 │ 09 Oct 25 20:20 UTC │ 09 Oct 25 20:20 UTC │
	│ image   │ embed-certs-565110 image list --format=json                                                                                                                                                                                                   │ embed-certs-565110           │ jenkins │ v1.37.0 │ 09 Oct 25 20:20 UTC │ 09 Oct 25 20:20 UTC │
	│ pause   │ -p embed-certs-565110 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-565110           │ jenkins │ v1.37.0 │ 09 Oct 25 20:20 UTC │                     │
	│ addons  │ enable dashboard -p default-k8s-diff-port-417984 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-417984 │ jenkins │ v1.37.0 │ 09 Oct 25 20:20 UTC │ 09 Oct 25 20:20 UTC │
	│ delete  │ -p embed-certs-565110                                                                                                                                                                                                                         │ embed-certs-565110           │ jenkins │ v1.37.0 │ 09 Oct 25 20:20 UTC │ 09 Oct 25 20:21 UTC │
	│ start   │ -p default-k8s-diff-port-417984 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-417984 │ jenkins │ v1.37.0 │ 09 Oct 25 20:20 UTC │ 09 Oct 25 20:21 UTC │
	│ delete  │ -p embed-certs-565110                                                                                                                                                                                                                         │ embed-certs-565110           │ jenkins │ v1.37.0 │ 09 Oct 25 20:21 UTC │ 09 Oct 25 20:21 UTC │
	│ start   │ -p newest-cni-160257 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-160257            │ jenkins │ v1.37.0 │ 09 Oct 25 20:21 UTC │ 09 Oct 25 20:21 UTC │
	│ addons  │ enable metrics-server -p newest-cni-160257 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-160257            │ jenkins │ v1.37.0 │ 09 Oct 25 20:21 UTC │                     │
	│ stop    │ -p newest-cni-160257 --alsologtostderr -v=3                                                                                                                                                                                                   │ newest-cni-160257            │ jenkins │ v1.37.0 │ 09 Oct 25 20:21 UTC │ 09 Oct 25 20:21 UTC │
	│ addons  │ enable dashboard -p newest-cni-160257 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ newest-cni-160257            │ jenkins │ v1.37.0 │ 09 Oct 25 20:21 UTC │ 09 Oct 25 20:21 UTC │
	│ start   │ -p newest-cni-160257 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-160257            │ jenkins │ v1.37.0 │ 09 Oct 25 20:21 UTC │ 09 Oct 25 20:22 UTC │
	│ image   │ newest-cni-160257 image list --format=json                                                                                                                                                                                                    │ newest-cni-160257            │ jenkins │ v1.37.0 │ 09 Oct 25 20:22 UTC │ 09 Oct 25 20:22 UTC │
	│ pause   │ -p newest-cni-160257 --alsologtostderr -v=1                                                                                                                                                                                                   │ newest-cni-160257            │ jenkins │ v1.37.0 │ 09 Oct 25 20:22 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴──────────────
───────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/09 20:21:47
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1009 20:21:47.225777  505641 out.go:360] Setting OutFile to fd 1 ...
	I1009 20:21:47.225898  505641 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1009 20:21:47.225907  505641 out.go:374] Setting ErrFile to fd 2...
	I1009 20:21:47.225913  505641 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1009 20:21:47.226184  505641 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21683-294150/.minikube/bin
	I1009 20:21:47.226608  505641 out.go:368] Setting JSON to false
	I1009 20:21:47.227622  505641 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":11047,"bootTime":1760030261,"procs":204,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1009 20:21:47.227724  505641 start.go:143] virtualization:  
	I1009 20:21:47.232792  505641 out.go:179] * [newest-cni-160257] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1009 20:21:47.236046  505641 out.go:179]   - MINIKUBE_LOCATION=21683
	I1009 20:21:47.236091  505641 notify.go:221] Checking for updates...
	I1009 20:21:47.244521  505641 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1009 20:21:47.247836  505641 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21683-294150/kubeconfig
	I1009 20:21:47.252638  505641 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21683-294150/.minikube
	I1009 20:21:47.255969  505641 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1009 20:21:47.259775  505641 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1009 20:21:47.263692  505641 config.go:182] Loaded profile config "newest-cni-160257": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1009 20:21:47.264262  505641 driver.go:422] Setting default libvirt URI to qemu:///system
	I1009 20:21:47.300892  505641 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1009 20:21:47.301004  505641 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1009 20:21:47.375444  505641 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-09 20:21:47.364461238 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214827008 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1009 20:21:47.375566  505641 docker.go:319] overlay module found
	I1009 20:21:47.378702  505641 out.go:179] * Using the docker driver based on existing profile
	I1009 20:21:47.381764  505641 start.go:309] selected driver: docker
	I1009 20:21:47.381788  505641 start.go:930] validating driver "docker" against &{Name:newest-cni-160257 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-160257 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker Mou
ntIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1009 20:21:47.381896  505641 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1009 20:21:47.382683  505641 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1009 20:21:47.444269  505641 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-09 20:21:47.43523216 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aa
rch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214827008 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path
:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1009 20:21:47.444618  505641 start_flags.go:1012] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1009 20:21:47.444655  505641 cni.go:84] Creating CNI manager for ""
	I1009 20:21:47.444713  505641 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1009 20:21:47.444753  505641 start.go:353] cluster config:
	{Name:newest-cni-160257 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-160257 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Containe
rRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker
BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1009 20:21:47.449762  505641 out.go:179] * Starting "newest-cni-160257" primary control-plane node in "newest-cni-160257" cluster
	I1009 20:21:47.452638  505641 cache.go:123] Beginning downloading kic base image for docker with crio
	I1009 20:21:47.455508  505641 out.go:179] * Pulling base image v0.0.48-1759745255-21703 ...
	I1009 20:21:47.458251  505641 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1009 20:21:47.458312  505641 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21683-294150/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1009 20:21:47.458325  505641 cache.go:58] Caching tarball of preloaded images
	I1009 20:21:47.458336  505641 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon
	I1009 20:21:47.458406  505641 preload.go:233] Found /home/jenkins/minikube-integration/21683-294150/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1009 20:21:47.458416  505641 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1009 20:21:47.458536  505641 profile.go:143] Saving config to /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/newest-cni-160257/config.json ...
	I1009 20:21:47.478231  505641 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon, skipping pull
	I1009 20:21:47.478255  505641 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 exists in daemon, skipping load
	I1009 20:21:47.478273  505641 cache.go:232] Successfully downloaded all kic artifacts
	I1009 20:21:47.478297  505641 start.go:361] acquireMachinesLock for newest-cni-160257: {Name:mkab4aa92a505aec53d4bce517e62dd4f38ff19e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1009 20:21:47.478362  505641 start.go:365] duration metric: took 36.932µs to acquireMachinesLock for "newest-cni-160257"
	I1009 20:21:47.478381  505641 start.go:97] Skipping create...Using existing machine configuration
	I1009 20:21:47.478392  505641 fix.go:55] fixHost starting: 
	I1009 20:21:47.478676  505641 cli_runner.go:164] Run: docker container inspect newest-cni-160257 --format={{.State.Status}}
	I1009 20:21:47.498453  505641 fix.go:113] recreateIfNeeded on newest-cni-160257: state=Stopped err=<nil>
	W1009 20:21:47.498498  505641 fix.go:139] unexpected machine state, will restart: <nil>
	W1009 20:21:46.391695  500265 pod_ready.go:104] pod "coredns-66bc5c9577-4c2vb" is not "Ready", error: <nil>
	W1009 20:21:48.892695  500265 pod_ready.go:104] pod "coredns-66bc5c9577-4c2vb" is not "Ready", error: <nil>
	I1009 20:21:47.501793  505641 out.go:252] * Restarting existing docker container for "newest-cni-160257" ...
	I1009 20:21:47.501900  505641 cli_runner.go:164] Run: docker start newest-cni-160257
	I1009 20:21:47.774834  505641 cli_runner.go:164] Run: docker container inspect newest-cni-160257 --format={{.State.Status}}
	I1009 20:21:47.798720  505641 kic.go:430] container "newest-cni-160257" state is running.
	I1009 20:21:47.799854  505641 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-160257
	I1009 20:21:47.826771  505641 profile.go:143] Saving config to /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/newest-cni-160257/config.json ...
	I1009 20:21:47.827119  505641 machine.go:93] provisionDockerMachine start ...
	I1009 20:21:47.827296  505641 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-160257
	I1009 20:21:47.853865  505641 main.go:141] libmachine: Using SSH client type: native
	I1009 20:21:47.854202  505641 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33461 <nil> <nil>}
	I1009 20:21:47.854217  505641 main.go:141] libmachine: About to run SSH command:
	hostname
	I1009 20:21:47.854805  505641 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:37994->127.0.0.1:33461: read: connection reset by peer
	I1009 20:21:51.009336  505641 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-160257
	
	I1009 20:21:51.009371  505641 ubuntu.go:182] provisioning hostname "newest-cni-160257"
	I1009 20:21:51.009454  505641 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-160257
	I1009 20:21:51.028512  505641 main.go:141] libmachine: Using SSH client type: native
	I1009 20:21:51.028834  505641 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33461 <nil> <nil>}
	I1009 20:21:51.028846  505641 main.go:141] libmachine: About to run SSH command:
	sudo hostname newest-cni-160257 && echo "newest-cni-160257" | sudo tee /etc/hostname
	I1009 20:21:51.195197  505641 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-160257
	
	I1009 20:21:51.195290  505641 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-160257
	I1009 20:21:51.214100  505641 main.go:141] libmachine: Using SSH client type: native
	I1009 20:21:51.214407  505641 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33461 <nil> <nil>}
	I1009 20:21:51.214425  505641 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-160257' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-160257/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-160257' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1009 20:21:51.385834  505641 main.go:141] libmachine: SSH cmd err, output: <nil>: 
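
For reference, the hostname provisioning above is plain remote command execution over the container's forwarded SSH port (127.0.0.1:33461, user "docker"). Below is a minimal standalone sketch of the same step using golang.org/x/crypto/ssh; this is not minikube's own code, and the key path is the one reported by the sshutil lines in this log.

package main

import (
	"fmt"
	"log"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	// private key path as reported by sshutil.go in this log
	key, err := os.ReadFile("/home/jenkins/minikube-integration/21683-294150/.minikube/machines/newest-cni-160257/id_rsa")
	if err != nil {
		log.Fatal(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		log.Fatal(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a throwaway local test container
	}
	// 127.0.0.1:33461 is the forwarded 22/tcp port of the newest-cni-160257 container
	client, err := ssh.Dial("tcp", "127.0.0.1:33461", cfg)
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()
	session, err := client.NewSession()
	if err != nil {
		log.Fatal(err)
	}
	defer session.Close()
	out, err := session.Output("hostname") // same command the provisioner runs above
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("%s", out)
}
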
	I1009 20:21:51.385861  505641 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21683-294150/.minikube CaCertPath:/home/jenkins/minikube-integration/21683-294150/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21683-294150/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21683-294150/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21683-294150/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21683-294150/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21683-294150/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21683-294150/.minikube}
	I1009 20:21:51.385883  505641 ubuntu.go:190] setting up certificates
	I1009 20:21:51.385893  505641 provision.go:84] configureAuth start
	I1009 20:21:51.385966  505641 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-160257
	I1009 20:21:51.410176  505641 provision.go:143] copyHostCerts
	I1009 20:21:51.410244  505641 exec_runner.go:144] found /home/jenkins/minikube-integration/21683-294150/.minikube/ca.pem, removing ...
	I1009 20:21:51.410265  505641 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21683-294150/.minikube/ca.pem
	I1009 20:21:51.410352  505641 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-294150/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21683-294150/.minikube/ca.pem (1078 bytes)
	I1009 20:21:51.410465  505641 exec_runner.go:144] found /home/jenkins/minikube-integration/21683-294150/.minikube/cert.pem, removing ...
	I1009 20:21:51.410477  505641 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21683-294150/.minikube/cert.pem
	I1009 20:21:51.410505  505641 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-294150/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21683-294150/.minikube/cert.pem (1123 bytes)
	I1009 20:21:51.410574  505641 exec_runner.go:144] found /home/jenkins/minikube-integration/21683-294150/.minikube/key.pem, removing ...
	I1009 20:21:51.410588  505641 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21683-294150/.minikube/key.pem
	I1009 20:21:51.410615  505641 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-294150/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21683-294150/.minikube/key.pem (1679 bytes)
	I1009 20:21:51.410679  505641 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21683-294150/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21683-294150/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21683-294150/.minikube/certs/ca-key.pem org=jenkins.newest-cni-160257 san=[127.0.0.1 192.168.76.2 localhost minikube newest-cni-160257]
	I1009 20:21:51.863331  505641 provision.go:177] copyRemoteCerts
	I1009 20:21:51.863402  505641 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1009 20:21:51.863461  505641 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-160257
	I1009 20:21:51.880865  505641 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33461 SSHKeyPath:/home/jenkins/minikube-integration/21683-294150/.minikube/machines/newest-cni-160257/id_rsa Username:docker}
	I1009 20:21:51.993391  505641 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-294150/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1009 20:21:52.015149  505641 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-294150/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1009 20:21:52.036074  505641 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-294150/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1009 20:21:52.055481  505641 provision.go:87] duration metric: took 669.560055ms to configureAuth
	I1009 20:21:52.055507  505641 ubuntu.go:206] setting minikube options for container-runtime
	I1009 20:21:52.055721  505641 config.go:182] Loaded profile config "newest-cni-160257": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1009 20:21:52.055831  505641 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-160257
	I1009 20:21:52.074474  505641 main.go:141] libmachine: Using SSH client type: native
	I1009 20:21:52.074899  505641 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33461 <nil> <nil>}
	I1009 20:21:52.074924  505641 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1009 20:21:52.387858  505641 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1009 20:21:52.387888  505641 machine.go:96] duration metric: took 4.560754136s to provisionDockerMachine
	I1009 20:21:52.387899  505641 start.go:294] postStartSetup for "newest-cni-160257" (driver="docker")
	I1009 20:21:52.387910  505641 start.go:323] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1009 20:21:52.387969  505641 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1009 20:21:52.388017  505641 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-160257
	I1009 20:21:52.412217  505641 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33461 SSHKeyPath:/home/jenkins/minikube-integration/21683-294150/.minikube/machines/newest-cni-160257/id_rsa Username:docker}
	I1009 20:21:52.526292  505641 ssh_runner.go:195] Run: cat /etc/os-release
	I1009 20:21:52.530482  505641 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1009 20:21:52.530553  505641 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1009 20:21:52.530579  505641 filesync.go:126] Scanning /home/jenkins/minikube-integration/21683-294150/.minikube/addons for local assets ...
	I1009 20:21:52.530667  505641 filesync.go:126] Scanning /home/jenkins/minikube-integration/21683-294150/.minikube/files for local assets ...
	I1009 20:21:52.530782  505641 filesync.go:149] local asset: /home/jenkins/minikube-integration/21683-294150/.minikube/files/etc/ssl/certs/2960022.pem -> 2960022.pem in /etc/ssl/certs
	I1009 20:21:52.530942  505641 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1009 20:21:52.540673  505641 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-294150/.minikube/files/etc/ssl/certs/2960022.pem --> /etc/ssl/certs/2960022.pem (1708 bytes)
	I1009 20:21:52.563308  505641 start.go:297] duration metric: took 175.393484ms for postStartSetup
	I1009 20:21:52.563461  505641 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1009 20:21:52.563528  505641 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-160257
	I1009 20:21:52.580881  505641 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33461 SSHKeyPath:/home/jenkins/minikube-integration/21683-294150/.minikube/machines/newest-cni-160257/id_rsa Username:docker}
	I1009 20:21:52.682184  505641 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1009 20:21:52.686923  505641 fix.go:57] duration metric: took 5.208523021s for fixHost
	I1009 20:21:52.686956  505641 start.go:84] releasing machines lock for "newest-cni-160257", held for 5.208584913s
	I1009 20:21:52.687039  505641 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-160257
	I1009 20:21:52.704600  505641 ssh_runner.go:195] Run: cat /version.json
	I1009 20:21:52.704636  505641 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1009 20:21:52.704650  505641 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-160257
	I1009 20:21:52.704689  505641 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-160257
	I1009 20:21:52.732613  505641 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33461 SSHKeyPath:/home/jenkins/minikube-integration/21683-294150/.minikube/machines/newest-cni-160257/id_rsa Username:docker}
	I1009 20:21:52.746816  505641 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33461 SSHKeyPath:/home/jenkins/minikube-integration/21683-294150/.minikube/machines/newest-cni-160257/id_rsa Username:docker}
	I1009 20:21:52.956865  505641 ssh_runner.go:195] Run: systemctl --version
	I1009 20:21:52.963451  505641 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1009 20:21:53.007462  505641 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1009 20:21:53.013409  505641 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1009 20:21:53.013495  505641 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1009 20:21:53.022944  505641 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1009 20:21:53.022975  505641 start.go:496] detecting cgroup driver to use...
	I1009 20:21:53.023044  505641 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1009 20:21:53.023144  505641 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1009 20:21:53.038534  505641 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1009 20:21:53.052168  505641 docker.go:218] disabling cri-docker service (if available) ...
	I1009 20:21:53.052276  505641 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1009 20:21:53.068820  505641 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1009 20:21:53.083319  505641 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1009 20:21:53.207043  505641 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1009 20:21:53.336124  505641 docker.go:234] disabling docker service ...
	I1009 20:21:53.336193  505641 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1009 20:21:53.352807  505641 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1009 20:21:53.366353  505641 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1009 20:21:53.489904  505641 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1009 20:21:53.611315  505641 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1009 20:21:53.625318  505641 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1009 20:21:53.641664  505641 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1009 20:21:53.641780  505641 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 20:21:53.651576  505641 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1009 20:21:53.651646  505641 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 20:21:53.662547  505641 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 20:21:53.671928  505641 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 20:21:53.681650  505641 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1009 20:21:53.690299  505641 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 20:21:53.699940  505641 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 20:21:53.709708  505641 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 20:21:53.719273  505641 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1009 20:21:53.728115  505641 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1009 20:21:53.736223  505641 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 20:21:53.859445  505641 ssh_runner.go:195] Run: sudo systemctl restart crio
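
The sed commands above point CRI-O at the registry.k8s.io/pause:3.10.1 pause image and the cgroupfs cgroup manager before the daemon is restarted. A minimal Go sketch performing the same two substitutions follows; it is an illustration under the assumption that the drop-in file exists at the path shown above, not minikube's implementation (which shells out to sed).

package main

import (
	"log"
	"os"
	"regexp"
)

func main() {
	const path = "/etc/crio/crio.conf.d/02-crio.conf"
	data, err := os.ReadFile(path)
	if err != nil {
		log.Fatal(err)
	}
	conf := string(data)
	// mirrors: sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|'
	conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.10.1"`)
	// mirrors: sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|'
	conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)
	if err := os.WriteFile(path, []byte(conf), 0o644); err != nil {
		log.Fatal(err)
	}
}
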
	I1009 20:21:54.009289  505641 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1009 20:21:54.009385  505641 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1009 20:21:54.014669  505641 start.go:564] Will wait 60s for crictl version
	I1009 20:21:54.014762  505641 ssh_runner.go:195] Run: which crictl
	I1009 20:21:54.018778  505641 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1009 20:21:54.044923  505641 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1009 20:21:54.045032  505641 ssh_runner.go:195] Run: crio --version
	I1009 20:21:54.077028  505641 ssh_runner.go:195] Run: crio --version
	I1009 20:21:54.111747  505641 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1009 20:21:54.114671  505641 cli_runner.go:164] Run: docker network inspect newest-cni-160257 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1009 20:21:54.130922  505641 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1009 20:21:54.134873  505641 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1009 20:21:54.150070  505641 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	W1009 20:21:50.893834  500265 pod_ready.go:104] pod "coredns-66bc5c9577-4c2vb" is not "Ready", error: <nil>
	W1009 20:21:53.392803  500265 pod_ready.go:104] pod "coredns-66bc5c9577-4c2vb" is not "Ready", error: <nil>
	I1009 20:21:54.393013  500265 pod_ready.go:94] pod "coredns-66bc5c9577-4c2vb" is "Ready"
	I1009 20:21:54.393038  500265 pod_ready.go:86] duration metric: took 32.006387261s for pod "coredns-66bc5c9577-4c2vb" in "kube-system" namespace to be "Ready" or be gone ...
	I1009 20:21:54.396978  500265 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-417984" in "kube-system" namespace to be "Ready" or be gone ...
	I1009 20:21:54.403052  500265 pod_ready.go:94] pod "etcd-default-k8s-diff-port-417984" is "Ready"
	I1009 20:21:54.403075  500265 pod_ready.go:86] duration metric: took 6.075564ms for pod "etcd-default-k8s-diff-port-417984" in "kube-system" namespace to be "Ready" or be gone ...
	I1009 20:21:54.406444  500265 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-417984" in "kube-system" namespace to be "Ready" or be gone ...
	I1009 20:21:54.412180  500265 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-417984" is "Ready"
	I1009 20:21:54.412203  500265 pod_ready.go:86] duration metric: took 5.733758ms for pod "kube-apiserver-default-k8s-diff-port-417984" in "kube-system" namespace to be "Ready" or be gone ...
	I1009 20:21:54.415033  500265 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-417984" in "kube-system" namespace to be "Ready" or be gone ...
	I1009 20:21:54.152910  505641 kubeadm.go:883] updating cluster {Name:newest-cni-160257 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-160257 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1009 20:21:54.153074  505641 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1009 20:21:54.153263  505641 ssh_runner.go:195] Run: sudo crictl images --output json
	I1009 20:21:54.201986  505641 crio.go:514] all images are preloaded for cri-o runtime.
	I1009 20:21:54.202008  505641 crio.go:433] Images already preloaded, skipping extraction
	I1009 20:21:54.202092  505641 ssh_runner.go:195] Run: sudo crictl images --output json
	I1009 20:21:54.230381  505641 crio.go:514] all images are preloaded for cri-o runtime.
	I1009 20:21:54.230446  505641 cache_images.go:85] Images are preloaded, skipping loading
	I1009 20:21:54.230479  505641 kubeadm.go:934] updating node { 192.168.76.2 8443 v1.34.1 crio true true} ...
	I1009 20:21:54.230592  505641 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=newest-cni-160257 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:newest-cni-160257 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1009 20:21:54.230706  505641 ssh_runner.go:195] Run: crio config
	I1009 20:21:54.302972  505641 cni.go:84] Creating CNI manager for ""
	I1009 20:21:54.303001  505641 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1009 20:21:54.303044  505641 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1009 20:21:54.303084  505641 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-160257 NodeName:newest-cni-160257 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1009 20:21:54.303315  505641 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-160257"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1009 20:21:54.303416  505641 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1009 20:21:54.315581  505641 binaries.go:44] Found k8s binaries, skipping transfer
	I1009 20:21:54.315723  505641 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1009 20:21:54.323840  505641 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1009 20:21:54.344587  505641 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1009 20:21:54.359645  505641 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2212 bytes)
	I1009 20:21:54.373013  505641 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1009 20:21:54.376761  505641 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1009 20:21:54.386749  505641 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 20:21:54.517399  505641 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1009 20:21:54.534719  505641 certs.go:69] Setting up /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/newest-cni-160257 for IP: 192.168.76.2
	I1009 20:21:54.534737  505641 certs.go:195] generating shared ca certs ...
	I1009 20:21:54.534761  505641 certs.go:227] acquiring lock for ca certs: {Name:mk21fccf7de12baa51e226eec44546b42b579a70 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 20:21:54.534896  505641 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21683-294150/.minikube/ca.key
	I1009 20:21:54.534936  505641 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21683-294150/.minikube/proxy-client-ca.key
	I1009 20:21:54.534943  505641 certs.go:257] generating profile certs ...
	I1009 20:21:54.535020  505641 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/newest-cni-160257/client.key
	I1009 20:21:54.535080  505641 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/newest-cni-160257/apiserver.key.f76169c2
	I1009 20:21:54.535117  505641 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/newest-cni-160257/proxy-client.key
	I1009 20:21:54.535227  505641 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-294150/.minikube/certs/296002.pem (1338 bytes)
	W1009 20:21:54.535254  505641 certs.go:480] ignoring /home/jenkins/minikube-integration/21683-294150/.minikube/certs/296002_empty.pem, impossibly tiny 0 bytes
	I1009 20:21:54.535262  505641 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-294150/.minikube/certs/ca-key.pem (1679 bytes)
	I1009 20:21:54.535293  505641 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-294150/.minikube/certs/ca.pem (1078 bytes)
	I1009 20:21:54.535320  505641 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-294150/.minikube/certs/cert.pem (1123 bytes)
	I1009 20:21:54.535341  505641 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-294150/.minikube/certs/key.pem (1679 bytes)
	I1009 20:21:54.535381  505641 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-294150/.minikube/files/etc/ssl/certs/2960022.pem (1708 bytes)
	I1009 20:21:54.535945  505641 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-294150/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1009 20:21:54.556736  505641 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-294150/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1009 20:21:54.577440  505641 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-294150/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1009 20:21:54.598476  505641 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-294150/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1009 20:21:54.618936  505641 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/newest-cni-160257/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1009 20:21:54.641991  505641 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/newest-cni-160257/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1009 20:21:54.675026  505641 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/newest-cni-160257/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1009 20:21:54.699287  505641 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/newest-cni-160257/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1009 20:21:54.720084  505641 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-294150/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1009 20:21:54.749667  505641 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-294150/.minikube/certs/296002.pem --> /usr/share/ca-certificates/296002.pem (1338 bytes)
	I1009 20:21:54.779863  505641 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-294150/.minikube/files/etc/ssl/certs/2960022.pem --> /usr/share/ca-certificates/2960022.pem (1708 bytes)
	I1009 20:21:54.802813  505641 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1009 20:21:54.820188  505641 ssh_runner.go:195] Run: openssl version
	I1009 20:21:54.828192  505641 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1009 20:21:54.838549  505641 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1009 20:21:54.844886  505641 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  9 19:01 /usr/share/ca-certificates/minikubeCA.pem
	I1009 20:21:54.845006  505641 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1009 20:21:54.901921  505641 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1009 20:21:54.911134  505641 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/296002.pem && ln -fs /usr/share/ca-certificates/296002.pem /etc/ssl/certs/296002.pem"
	I1009 20:21:54.919928  505641 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/296002.pem
	I1009 20:21:54.924078  505641 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  9 19:08 /usr/share/ca-certificates/296002.pem
	I1009 20:21:54.924152  505641 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/296002.pem
	I1009 20:21:54.965535  505641 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/296002.pem /etc/ssl/certs/51391683.0"
	I1009 20:21:54.974008  505641 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2960022.pem && ln -fs /usr/share/ca-certificates/2960022.pem /etc/ssl/certs/2960022.pem"
	I1009 20:21:54.983672  505641 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2960022.pem
	I1009 20:21:54.987949  505641 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  9 19:08 /usr/share/ca-certificates/2960022.pem
	I1009 20:21:54.988063  505641 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2960022.pem
	I1009 20:21:55.032343  505641 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2960022.pem /etc/ssl/certs/3ec20f2e.0"
	I1009 20:21:55.042132  505641 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1009 20:21:55.047524  505641 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1009 20:21:55.091675  505641 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1009 20:21:55.150741  505641 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1009 20:21:55.227909  505641 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1009 20:21:55.337070  505641 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1009 20:21:55.407552  505641 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
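
Each `openssl x509 -noout -in <cert> -checkend 86400` call above asks whether the certificate expires within the next 24 hours. A minimal Go equivalent using crypto/x509 is sketched below; it is illustrative only, and the path shown is one of the certificates checked above.

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the PEM certificate at path expires within d,
// mirroring `openssl x509 -checkend` with d expressed as a duration.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM data in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Println("error:", err)
		return
	}
	fmt.Println("expires within 24h:", soon)
}
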
	I1009 20:21:55.474195  505641 kubeadm.go:400] StartCluster: {Name:newest-cni-160257 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-160257 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1009 20:21:55.474304  505641 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1009 20:21:55.474430  505641 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1009 20:21:55.531354  505641 cri.go:89] found id: "c44d38264b4c8f90676162945fe05a02624867f517df88a401a8ae08e56998fc"
	I1009 20:21:55.531377  505641 cri.go:89] found id: "abf4f184374a54e8b81747413f26453de47bd605a2aeb2a0889c7f019dc40141"
	I1009 20:21:55.531383  505641 cri.go:89] found id: "8e9d85a685b554c78f90ad52ce9e2e08feb85c5a0c3c0cecaa44409529755644"
	I1009 20:21:55.531387  505641 cri.go:89] found id: "1bb5609884005775d5cb2c3c1d622130225e6c83a8497006aa8e75133f859524"
	I1009 20:21:55.531391  505641 cri.go:89] found id: ""
	I1009 20:21:55.531471  505641 ssh_runner.go:195] Run: sudo runc list -f json
	W1009 20:21:55.544796  505641 kubeadm.go:407] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-09T20:21:55Z" level=error msg="open /run/runc: no such file or directory"
	I1009 20:21:55.544919  505641 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1009 20:21:55.554602  505641 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1009 20:21:55.554623  505641 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1009 20:21:55.554701  505641 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1009 20:21:55.564787  505641 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1009 20:21:55.565500  505641 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-160257" does not appear in /home/jenkins/minikube-integration/21683-294150/kubeconfig
	I1009 20:21:55.565859  505641 kubeconfig.go:62] /home/jenkins/minikube-integration/21683-294150/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-160257" cluster setting kubeconfig missing "newest-cni-160257" context setting]
	I1009 20:21:55.566392  505641 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-294150/kubeconfig: {Name:mke4661344aa77e10bf6690825e8d3a29b29b411 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 20:21:55.568289  505641 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1009 20:21:55.578851  505641 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.76.2
	I1009 20:21:55.578897  505641 kubeadm.go:601] duration metric: took 24.267047ms to restartPrimaryControlPlane
	I1009 20:21:55.578907  505641 kubeadm.go:402] duration metric: took 104.740677ms to StartCluster
	I1009 20:21:55.578944  505641 settings.go:142] acquiring lock: {Name:mk20228ebaa2294ae35726600a0d8058088b24a4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 20:21:55.579059  505641 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21683-294150/kubeconfig
	I1009 20:21:55.580100  505641 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-294150/kubeconfig: {Name:mke4661344aa77e10bf6690825e8d3a29b29b411 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 20:21:55.580385  505641 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1009 20:21:55.580788  505641 config.go:182] Loaded profile config "newest-cni-160257": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1009 20:21:55.581007  505641 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1009 20:21:55.581158  505641 addons.go:69] Setting storage-provisioner=true in profile "newest-cni-160257"
	I1009 20:21:55.581195  505641 addons.go:238] Setting addon storage-provisioner=true in "newest-cni-160257"
	W1009 20:21:55.581214  505641 addons.go:247] addon storage-provisioner should already be in state true
	I1009 20:21:55.581264  505641 host.go:66] Checking if "newest-cni-160257" exists ...
	I1009 20:21:55.581895  505641 cli_runner.go:164] Run: docker container inspect newest-cni-160257 --format={{.State.Status}}
	I1009 20:21:55.582119  505641 addons.go:69] Setting dashboard=true in profile "newest-cni-160257"
	I1009 20:21:55.582152  505641 addons.go:238] Setting addon dashboard=true in "newest-cni-160257"
	W1009 20:21:55.582166  505641 addons.go:247] addon dashboard should already be in state true
	I1009 20:21:55.582202  505641 host.go:66] Checking if "newest-cni-160257" exists ...
	I1009 20:21:55.582545  505641 addons.go:69] Setting default-storageclass=true in profile "newest-cni-160257"
	I1009 20:21:55.582570  505641 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-160257"
	I1009 20:21:55.582699  505641 cli_runner.go:164] Run: docker container inspect newest-cni-160257 --format={{.State.Status}}
	I1009 20:21:55.582865  505641 cli_runner.go:164] Run: docker container inspect newest-cni-160257 --format={{.State.Status}}
	I1009 20:21:55.587168  505641 out.go:179] * Verifying Kubernetes components...
	I1009 20:21:55.590497  505641 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 20:21:55.647593  505641 addons.go:238] Setting addon default-storageclass=true in "newest-cni-160257"
	W1009 20:21:55.647617  505641 addons.go:247] addon default-storageclass should already be in state true
	I1009 20:21:55.647642  505641 host.go:66] Checking if "newest-cni-160257" exists ...
	I1009 20:21:55.648040  505641 cli_runner.go:164] Run: docker container inspect newest-cni-160257 --format={{.State.Status}}
	I1009 20:21:55.651603  505641 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1009 20:21:55.654619  505641 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1009 20:21:55.657431  505641 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1009 20:21:54.590365  500265 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-417984" is "Ready"
	I1009 20:21:54.590394  500265 pod_ready.go:86] duration metric: took 175.292139ms for pod "kube-controller-manager-default-k8s-diff-port-417984" in "kube-system" namespace to be "Ready" or be gone ...
	I1009 20:21:54.791524  500265 pod_ready.go:83] waiting for pod "kube-proxy-jnlzf" in "kube-system" namespace to be "Ready" or be gone ...
	I1009 20:21:55.191059  500265 pod_ready.go:94] pod "kube-proxy-jnlzf" is "Ready"
	I1009 20:21:55.191086  500265 pod_ready.go:86] duration metric: took 399.520534ms for pod "kube-proxy-jnlzf" in "kube-system" namespace to be "Ready" or be gone ...
	I1009 20:21:55.401649  500265 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-417984" in "kube-system" namespace to be "Ready" or be gone ...
	I1009 20:21:55.790702  500265 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-417984" is "Ready"
	I1009 20:21:55.790734  500265 pod_ready.go:86] duration metric: took 389.05888ms for pod "kube-scheduler-default-k8s-diff-port-417984" in "kube-system" namespace to be "Ready" or be gone ...
	I1009 20:21:55.790747  500265 pod_ready.go:40] duration metric: took 33.412767938s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1009 20:21:55.908403  500265 start.go:628] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1009 20:21:55.912550  500265 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-417984" cluster and "default" namespace by default
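
The pod_ready waits above poll kube-system pods until their PodReady condition is True. A minimal sketch of the same check with client-go follows; it is illustrative rather than the pod_ready.go implementation, with the kubeconfig path and pod name taken from this log.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// kubeconfig path as written by this job; pod name from the coredns wait above
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/21683-294150/kubeconfig")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	for {
		pod, err := client.CoreV1().Pods("kube-system").Get(context.TODO(), "coredns-66bc5c9577-4c2vb", metav1.GetOptions{})
		if err == nil {
			for _, cond := range pod.Status.Conditions {
				// a pod counts as "Ready" once this condition is True
				if cond.Type == corev1.PodReady && cond.Status == corev1.ConditionTrue {
					fmt.Println("pod is Ready")
					return
				}
			}
		}
		time.Sleep(2 * time.Second)
	}
}
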
	I1009 20:21:55.657478  505641 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1009 20:21:55.657494  505641 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1009 20:21:55.657567  505641 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-160257
	I1009 20:21:55.660364  505641 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1009 20:21:55.660398  505641 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1009 20:21:55.660470  505641 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-160257
	I1009 20:21:55.697377  505641 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33461 SSHKeyPath:/home/jenkins/minikube-integration/21683-294150/.minikube/machines/newest-cni-160257/id_rsa Username:docker}
	I1009 20:21:55.699465  505641 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1009 20:21:55.699490  505641 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1009 20:21:55.699555  505641 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-160257
	I1009 20:21:55.731795  505641 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33461 SSHKeyPath:/home/jenkins/minikube-integration/21683-294150/.minikube/machines/newest-cni-160257/id_rsa Username:docker}
	I1009 20:21:55.737875  505641 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33461 SSHKeyPath:/home/jenkins/minikube-integration/21683-294150/.minikube/machines/newest-cni-160257/id_rsa Username:docker}
	I1009 20:21:56.004907  505641 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1009 20:21:56.076607  505641 api_server.go:52] waiting for apiserver process to appear ...
	I1009 20:21:56.076689  505641 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:21:56.092264  505641 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1009 20:21:56.114169  505641 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1009 20:21:56.195392  505641 api_server.go:72] duration metric: took 614.960541ms to wait for apiserver process to appear ...
	I1009 20:21:56.195421  505641 api_server.go:88] waiting for apiserver healthz status ...
	I1009 20:21:56.195440  505641 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1009 20:21:56.220796  505641 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1009 20:21:56.220824  505641 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1009 20:21:56.317005  505641 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1009 20:21:56.317034  505641 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1009 20:21:56.422522  505641 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1009 20:21:56.422572  505641 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1009 20:21:56.531775  505641 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1009 20:21:56.531796  505641 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1009 20:21:56.547920  505641 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1009 20:21:56.547943  505641 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1009 20:21:56.564206  505641 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1009 20:21:56.564229  505641 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1009 20:21:56.579829  505641 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1009 20:21:56.579853  505641 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1009 20:21:56.595478  505641 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1009 20:21:56.595502  505641 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1009 20:21:56.610137  505641 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1009 20:21:56.610170  505641 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1009 20:21:56.625847  505641 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1009 20:22:00.620389  505641 api_server.go:279] https://192.168.76.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1009 20:22:00.620423  505641 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1009 20:22:00.620437  505641 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1009 20:22:00.703423  505641 api_server.go:279] https://192.168.76.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\": RBAC: clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found","reason":"Forbidden","details":{},"code":403}
	W1009 20:22:00.703455  505641 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\": RBAC: clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found","reason":"Forbidden","details":{},"code":403}
	I1009 20:22:00.703471  505641 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1009 20:22:00.748276  505641 api_server.go:279] https://192.168.76.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\": RBAC: clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found","reason":"Forbidden","details":{},"code":403}
	W1009 20:22:00.748306  505641 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\": RBAC: clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found","reason":"Forbidden","details":{},"code":403}
	I1009 20:22:00.909192  505641 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (4.816888973s)
	I1009 20:22:01.196456  505641 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1009 20:22:01.216282  505641 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1009 20:22:01.216316  505641 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1009 20:22:01.696494  505641 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1009 20:22:01.739358  505641 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1009 20:22:01.739386  505641 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1009 20:22:02.194638  505641 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (5.568746238s)
	I1009 20:22:02.194890  505641 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (6.080695838s)
	I1009 20:22:02.195688  505641 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1009 20:22:02.197926  505641 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p newest-cni-160257 addons enable metrics-server
	
	I1009 20:22:02.200869  505641 out.go:179] * Enabled addons: default-storageclass, storage-provisioner, dashboard
	I1009 20:22:02.203855  505641 addons.go:514] duration metric: took 6.622867491s for enable addons: enabled=[default-storageclass storage-provisioner dashboard]
	I1009 20:22:02.205011  505641 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1009 20:22:02.205039  505641 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1009 20:22:02.695565  505641 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1009 20:22:02.707753  505641 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1009 20:22:02.710100  505641 api_server.go:141] control plane version: v1.34.1
	I1009 20:22:02.710127  505641 api_server.go:131] duration metric: took 6.514699514s to wait for apiserver health ...
	I1009 20:22:02.710137  505641 system_pods.go:43] waiting for kube-system pods to appear ...
	I1009 20:22:02.720979  505641 system_pods.go:59] 8 kube-system pods found
	I1009 20:22:02.721011  505641 system_pods.go:61] "coredns-66bc5c9577-h6jjt" [48d28596-1503-4675-b84d-a0770eea0d66] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1009 20:22:02.721020  505641 system_pods.go:61] "etcd-newest-cni-160257" [7c59b451-dfcc-492f-a84f-2b02319332fb] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1009 20:22:02.721029  505641 system_pods.go:61] "kindnet-bgspl" [d8f6a466-a843-4773-968c-86550cdbe807] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1009 20:22:02.721038  505641 system_pods.go:61] "kube-apiserver-newest-cni-160257" [12beea36-feb5-44e6-8093-e6627a7c0bc4] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1009 20:22:02.721046  505641 system_pods.go:61] "kube-controller-manager-newest-cni-160257" [d721fd3e-4510-4c9d-8156-1389f2c157e2] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1009 20:22:02.721053  505641 system_pods.go:61] "kube-proxy-q5mpb" [efd41b4d-05f4-4870-b04c-cca5ec803e68] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1009 20:22:02.721066  505641 system_pods.go:61] "kube-scheduler-newest-cni-160257" [80050cec-2104-4888-a8e1-611f33e21d87] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1009 20:22:02.721085  505641 system_pods.go:61] "storage-provisioner" [d17148c8-3517-4026-aa73-4a1705edbddf] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1009 20:22:02.721095  505641 system_pods.go:74] duration metric: took 10.948946ms to wait for pod list to return data ...
	I1009 20:22:02.721104  505641 default_sa.go:34] waiting for default service account to be created ...
	I1009 20:22:02.730183  505641 default_sa.go:45] found service account: "default"
	I1009 20:22:02.730206  505641 default_sa.go:55] duration metric: took 9.07643ms for default service account to be created ...
	I1009 20:22:02.730219  505641 kubeadm.go:586] duration metric: took 7.149792386s to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1009 20:22:02.730236  505641 node_conditions.go:102] verifying NodePressure condition ...
	I1009 20:22:02.742358  505641 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1009 20:22:02.742448  505641 node_conditions.go:123] node cpu capacity is 2
	I1009 20:22:02.742476  505641 node_conditions.go:105] duration metric: took 12.233114ms to run NodePressure ...
	I1009 20:22:02.742518  505641 start.go:242] waiting for startup goroutines ...
	I1009 20:22:02.742544  505641 start.go:247] waiting for cluster config update ...
	I1009 20:22:02.742587  505641 start.go:256] writing updated cluster config ...
	I1009 20:22:02.742967  505641 ssh_runner.go:195] Run: rm -f paused
	I1009 20:22:02.844541  505641 start.go:628] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1009 20:22:02.849754  505641 out.go:179] * Done! kubectl is now configured to use "newest-cni-160257" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Oct 09 20:22:02 newest-cni-160257 crio[611]: time="2025-10-09T20:22:02.538063206Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 09 20:22:02 newest-cni-160257 crio[611]: time="2025-10-09T20:22:02.541460336Z" level=info msg="Running pod sandbox: kube-system/kube-proxy-q5mpb/POD" id=a1401b41-f5ed-4c8e-8476-ddbfa8bc0814 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 09 20:22:02 newest-cni-160257 crio[611]: time="2025-10-09T20:22:02.541523779Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 09 20:22:02 newest-cni-160257 crio[611]: time="2025-10-09T20:22:02.545398011Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=a1401b41-f5ed-4c8e-8476-ddbfa8bc0814 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 09 20:22:02 newest-cni-160257 crio[611]: time="2025-10-09T20:22:02.546472461Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=cb15f1b7-1753-4f61-beff-1b42a5542cc4 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 09 20:22:02 newest-cni-160257 crio[611]: time="2025-10-09T20:22:02.549752937Z" level=info msg="Ran pod sandbox d037f519c51da9872e77f2e0881a7ccf6f5af218cfc8558010479b59dfb84f54 with infra container: kube-system/kube-proxy-q5mpb/POD" id=a1401b41-f5ed-4c8e-8476-ddbfa8bc0814 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 09 20:22:02 newest-cni-160257 crio[611]: time="2025-10-09T20:22:02.554135556Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=e81109eb-915b-4f7b-896b-e7d92b6bd7a8 name=/runtime.v1.ImageService/ImageStatus
	Oct 09 20:22:02 newest-cni-160257 crio[611]: time="2025-10-09T20:22:02.554571385Z" level=info msg="Ran pod sandbox 2a97c8ad885d0122325e63bae0911cbe824f9686ea71550a710b1c76e2d5ffd6 with infra container: kube-system/kindnet-bgspl/POD" id=cb15f1b7-1753-4f61-beff-1b42a5542cc4 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 09 20:22:02 newest-cni-160257 crio[611]: time="2025-10-09T20:22:02.559815472Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=c560ee79-1477-47dc-9f5e-f483b7260112 name=/runtime.v1.ImageService/ImageStatus
	Oct 09 20:22:02 newest-cni-160257 crio[611]: time="2025-10-09T20:22:02.560174213Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=3032bb72-2fa8-4480-a6d2-191f1b8c29f6 name=/runtime.v1.ImageService/ImageStatus
	Oct 09 20:22:02 newest-cni-160257 crio[611]: time="2025-10-09T20:22:02.563540819Z" level=info msg="Creating container: kube-system/kube-proxy-q5mpb/kube-proxy" id=2e8adef3-cd08-4160-b101-2c6e1070d9c1 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 20:22:02 newest-cni-160257 crio[611]: time="2025-10-09T20:22:02.564080773Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 09 20:22:02 newest-cni-160257 crio[611]: time="2025-10-09T20:22:02.565672702Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=254f716d-dbed-4d4b-be45-477347970620 name=/runtime.v1.ImageService/ImageStatus
	Oct 09 20:22:02 newest-cni-160257 crio[611]: time="2025-10-09T20:22:02.567135744Z" level=info msg="Creating container: kube-system/kindnet-bgspl/kindnet-cni" id=d62a8b5d-b27b-44d6-965a-5eec3e28fb02 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 20:22:02 newest-cni-160257 crio[611]: time="2025-10-09T20:22:02.567382705Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 09 20:22:02 newest-cni-160257 crio[611]: time="2025-10-09T20:22:02.58648313Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 09 20:22:02 newest-cni-160257 crio[611]: time="2025-10-09T20:22:02.587059318Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 09 20:22:02 newest-cni-160257 crio[611]: time="2025-10-09T20:22:02.590895009Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 09 20:22:02 newest-cni-160257 crio[611]: time="2025-10-09T20:22:02.59270318Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 09 20:22:02 newest-cni-160257 crio[611]: time="2025-10-09T20:22:02.631570106Z" level=info msg="Created container ce12541f225c961a2d52993ca9aeb9b77f4d1c3c1d6fd17ff705960d66604ae6: kube-system/kindnet-bgspl/kindnet-cni" id=d62a8b5d-b27b-44d6-965a-5eec3e28fb02 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 20:22:02 newest-cni-160257 crio[611]: time="2025-10-09T20:22:02.63248686Z" level=info msg="Starting container: ce12541f225c961a2d52993ca9aeb9b77f4d1c3c1d6fd17ff705960d66604ae6" id=05f7fc9b-b594-4af1-93b6-078306ce56d9 name=/runtime.v1.RuntimeService/StartContainer
	Oct 09 20:22:02 newest-cni-160257 crio[611]: time="2025-10-09T20:22:02.635134138Z" level=info msg="Created container e69f5d91a1b96da308ea7785822c1e647575c92e17d9a695bf51f239b3fc3ccd: kube-system/kube-proxy-q5mpb/kube-proxy" id=2e8adef3-cd08-4160-b101-2c6e1070d9c1 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 20:22:02 newest-cni-160257 crio[611]: time="2025-10-09T20:22:02.637864347Z" level=info msg="Starting container: e69f5d91a1b96da308ea7785822c1e647575c92e17d9a695bf51f239b3fc3ccd" id=2b61086a-949a-4a10-908e-a0ffa509b000 name=/runtime.v1.RuntimeService/StartContainer
	Oct 09 20:22:02 newest-cni-160257 crio[611]: time="2025-10-09T20:22:02.639135419Z" level=info msg="Started container" PID=1063 containerID=ce12541f225c961a2d52993ca9aeb9b77f4d1c3c1d6fd17ff705960d66604ae6 description=kube-system/kindnet-bgspl/kindnet-cni id=05f7fc9b-b594-4af1-93b6-078306ce56d9 name=/runtime.v1.RuntimeService/StartContainer sandboxID=2a97c8ad885d0122325e63bae0911cbe824f9686ea71550a710b1c76e2d5ffd6
	Oct 09 20:22:02 newest-cni-160257 crio[611]: time="2025-10-09T20:22:02.64268117Z" level=info msg="Started container" PID=1064 containerID=e69f5d91a1b96da308ea7785822c1e647575c92e17d9a695bf51f239b3fc3ccd description=kube-system/kube-proxy-q5mpb/kube-proxy id=2b61086a-949a-4a10-908e-a0ffa509b000 name=/runtime.v1.RuntimeService/StartContainer sandboxID=d037f519c51da9872e77f2e0881a7ccf6f5af218cfc8558010479b59dfb84f54
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	ce12541f225c9       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c   4 seconds ago       Running             kindnet-cni               1                   2a97c8ad885d0       kindnet-bgspl                               kube-system
	e69f5d91a1b96       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9   4 seconds ago       Running             kube-proxy                1                   d037f519c51da       kube-proxy-q5mpb                            kube-system
	c44d38264b4c8       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a   11 seconds ago      Running             kube-controller-manager   1                   43399bda551b8       kube-controller-manager-newest-cni-160257   kube-system
	abf4f184374a5       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196   11 seconds ago      Running             kube-apiserver            1                   0687f443300e9       kube-apiserver-newest-cni-160257            kube-system
	8e9d85a685b55       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e   11 seconds ago      Running             etcd                      1                   7f3a95c98e0e6       etcd-newest-cni-160257                      kube-system
	1bb5609884005       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0   12 seconds ago      Running             kube-scheduler            1                   c9a7126ff6c3a       kube-scheduler-newest-cni-160257            kube-system
	
	
	==> describe nodes <==
	Name:               newest-cni-160257
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=newest-cni-160257
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=67931d2cc0a2e29153be17ee4f2d502b8a45c9cb
	                    minikube.k8s.io/name=newest-cni-160257
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_09T20_21_36_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 09 Oct 2025 20:21:32 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  newest-cni-160257
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 09 Oct 2025 20:22:00 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 09 Oct 2025 20:22:00 +0000   Thu, 09 Oct 2025 20:21:27 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 09 Oct 2025 20:22:00 +0000   Thu, 09 Oct 2025 20:21:27 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 09 Oct 2025 20:22:00 +0000   Thu, 09 Oct 2025 20:21:27 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Thu, 09 Oct 2025 20:22:00 +0000   Thu, 09 Oct 2025 20:21:27 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/cni/net.d/. Has your network provider started?
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    newest-cni-160257
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022292Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022292Ki
	  pods:               110
	System Info:
	  Machine ID:                 02c511aadbd449108e4b8c7226050824
	  System UUID:                0382347f-ca4b-4cf8-b386-5e98e49e227d
	  Boot ID:                    7eb9c7d9-8be8-415a-b196-052b632584a4
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.42.0.0/24
	PodCIDRs:                     10.42.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-newest-cni-160257                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         32s
	  kube-system                 kindnet-bgspl                                100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      26s
	  kube-system                 kube-apiserver-newest-cni-160257             250m (12%)    0 (0%)      0 (0%)           0 (0%)         32s
	  kube-system                 kube-controller-manager-newest-cni-160257    200m (10%)    0 (0%)      0 (0%)           0 (0%)         34s
	  kube-system                 kube-proxy-q5mpb                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         26s
	  kube-system                 kube-scheduler-newest-cni-160257             100m (5%)     0 (0%)      0 (0%)           0 (0%)         32s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (1%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 24s                kube-proxy       
	  Normal   Starting                 4s                 kube-proxy       
	  Normal   NodeHasSufficientMemory  42s (x8 over 42s)  kubelet          Node newest-cni-160257 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    42s (x8 over 42s)  kubelet          Node newest-cni-160257 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     42s (x8 over 42s)  kubelet          Node newest-cni-160257 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    32s                kubelet          Node newest-cni-160257 status is now: NodeHasNoDiskPressure
	  Warning  CgroupV1                 32s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  32s                kubelet          Node newest-cni-160257 status is now: NodeHasSufficientMemory
	  Normal   NodeHasSufficientPID     32s                kubelet          Node newest-cni-160257 status is now: NodeHasSufficientPID
	  Normal   Starting                 32s                kubelet          Starting kubelet.
	  Normal   RegisteredNode           27s                node-controller  Node newest-cni-160257 event: Registered Node newest-cni-160257 in Controller
	  Normal   Starting                 13s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 13s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  13s (x8 over 13s)  kubelet          Node newest-cni-160257 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    13s (x8 over 13s)  kubelet          Node newest-cni-160257 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     13s (x8 over 13s)  kubelet          Node newest-cni-160257 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           3s                 node-controller  Node newest-cni-160257 event: Registered Node newest-cni-160257 in Controller
	
	
	==> dmesg <==
	[  +2.167003] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:51] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:52] overlayfs: idmapped layers are currently not supported
	[ +41.056229] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:53] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:54] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:55] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:57] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:59] overlayfs: idmapped layers are currently not supported
	[ +30.257956] overlayfs: idmapped layers are currently not supported
	[Oct 9 20:02] overlayfs: idmapped layers are currently not supported
	[Oct 9 20:04] overlayfs: idmapped layers are currently not supported
	[Oct 9 20:06] overlayfs: idmapped layers are currently not supported
	[Oct 9 20:12] overlayfs: idmapped layers are currently not supported
	[Oct 9 20:14] overlayfs: idmapped layers are currently not supported
	[Oct 9 20:15] overlayfs: idmapped layers are currently not supported
	[Oct 9 20:16] overlayfs: idmapped layers are currently not supported
	[ +23.810739] overlayfs: idmapped layers are currently not supported
	[Oct 9 20:18] overlayfs: idmapped layers are currently not supported
	[ +26.082927] overlayfs: idmapped layers are currently not supported
	[Oct 9 20:19] overlayfs: idmapped layers are currently not supported
	[ +21.956614] overlayfs: idmapped layers are currently not supported
	[Oct 9 20:21] overlayfs: idmapped layers are currently not supported
	[ +16.062221] overlayfs: idmapped layers are currently not supported
	[ +28.876478] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [8e9d85a685b554c78f90ad52ce9e2e08feb85c5a0c3c0cecaa44409529755644] <==
	{"level":"warn","ts":"2025-10-09T20:21:58.939460Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56786","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T20:21:58.985516Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56814","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T20:21:59.060525Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56830","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T20:21:59.099510Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56846","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T20:21:59.117387Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56850","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T20:21:59.161591Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56870","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T20:21:59.201275Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56890","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T20:21:59.222208Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56912","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T20:21:59.268209Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56920","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T20:21:59.307172Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56954","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T20:21:59.310041Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56934","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T20:21:59.321565Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56976","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T20:21:59.344181Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57000","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T20:21:59.375240Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57026","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T20:21:59.382142Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57042","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T20:21:59.398184Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57056","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T20:21:59.415492Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57072","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T20:21:59.454916Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57084","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T20:21:59.485281Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57096","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T20:21:59.501225Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57114","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T20:21:59.531892Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57126","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T20:21:59.553245Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57148","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T20:21:59.576732Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57172","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T20:21:59.601786Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57202","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T20:21:59.686722Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57216","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 20:22:07 up  3:04,  0 user,  load average: 3.96, 3.44, 2.39
	Linux newest-cni-160257 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [ce12541f225c961a2d52993ca9aeb9b77f4d1c3c1d6fd17ff705960d66604ae6] <==
	I1009 20:22:02.821472       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1009 20:22:02.821984       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1009 20:22:02.822136       1 main.go:148] setting mtu 1500 for CNI 
	I1009 20:22:02.822176       1 main.go:178] kindnetd IP family: "ipv4"
	I1009 20:22:02.822218       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-09T20:22:03Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1009 20:22:03.016908       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1009 20:22:03.017031       1 controller.go:381] "Waiting for informer caches to sync"
	I1009 20:22:03.017068       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1009 20:22:03.018236       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	
	
	==> kube-apiserver [abf4f184374a54e8b81747413f26453de47bd605a2aeb2a0889c7f019dc40141] <==
	I1009 20:22:00.740095       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1009 20:22:00.794417       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1009 20:22:00.809228       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1009 20:22:00.814626       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1009 20:22:00.814653       1 policy_source.go:240] refreshing policies
	I1009 20:22:00.814946       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1009 20:22:00.828921       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1009 20:22:00.836809       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1009 20:22:00.836840       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1009 20:22:00.836965       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1009 20:22:00.838665       1 cache.go:39] Caches are synced for RemoteAvailability controller
	E1009 20:22:00.871928       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1009 20:22:01.546741       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1009 20:22:01.778256       1 controller.go:667] quota admission added evaluator for: namespaces
	I1009 20:22:01.903445       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1009 20:22:01.948991       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1009 20:22:01.962636       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1009 20:22:01.988367       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1009 20:22:02.078177       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.108.99.148"}
	I1009 20:22:02.105346       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.109.104.146"}
	I1009 20:22:04.112382       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1009 20:22:04.387764       1 controller.go:667] quota admission added evaluator for: endpoints
	I1009 20:22:04.387786       1 controller.go:667] quota admission added evaluator for: endpoints
	I1009 20:22:04.543669       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1009 20:22:04.598317       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	
	
	==> kube-controller-manager [c44d38264b4c8f90676162945fe05a02624867f517df88a401a8ae08e56998fc] <==
	I1009 20:22:04.049599       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1009 20:22:04.049610       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1009 20:22:04.049616       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1009 20:22:04.057331       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1009 20:22:04.065555       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1009 20:22:04.065858       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1009 20:22:04.075929       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1009 20:22:04.078982       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1009 20:22:04.082502       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1009 20:22:04.082617       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1009 20:22:04.083594       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1009 20:22:04.083662       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1009 20:22:04.084835       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1009 20:22:04.085356       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1009 20:22:04.086595       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1009 20:22:04.088131       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1009 20:22:04.090969       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1009 20:22:04.091153       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1009 20:22:04.091282       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1009 20:22:04.093929       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1009 20:22:04.096629       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1009 20:22:04.097839       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1009 20:22:04.101218       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1009 20:22:04.102827       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1009 20:22:04.108936       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	
	
	==> kube-proxy [e69f5d91a1b96da308ea7785822c1e647575c92e17d9a695bf51f239b3fc3ccd] <==
	I1009 20:22:02.719471       1 server_linux.go:53] "Using iptables proxy"
	I1009 20:22:02.851320       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1009 20:22:02.967883       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1009 20:22:02.967927       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1009 20:22:02.968016       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1009 20:22:03.007689       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1009 20:22:03.007767       1 server_linux.go:132] "Using iptables Proxier"
	I1009 20:22:03.026906       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1009 20:22:03.027332       1 server.go:527] "Version info" version="v1.34.1"
	I1009 20:22:03.027546       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1009 20:22:03.029377       1 config.go:200] "Starting service config controller"
	I1009 20:22:03.029455       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1009 20:22:03.029509       1 config.go:106] "Starting endpoint slice config controller"
	I1009 20:22:03.029554       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1009 20:22:03.029592       1 config.go:403] "Starting serviceCIDR config controller"
	I1009 20:22:03.029631       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1009 20:22:03.030935       1 config.go:309] "Starting node config controller"
	I1009 20:22:03.031024       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1009 20:22:03.031080       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1009 20:22:03.129847       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1009 20:22:03.129963       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1009 20:22:03.129994       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [1bb5609884005775d5cb2c3c1d622130225e6c83a8497006aa8e75133f859524] <==
	I1009 20:21:58.771959       1 serving.go:386] Generated self-signed cert in-memory
	I1009 20:22:01.667606       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1009 20:22:01.670253       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1009 20:22:01.706298       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1009 20:22:01.706382       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1009 20:22:01.706404       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1009 20:22:01.706427       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1009 20:22:01.724039       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1009 20:22:01.724075       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1009 20:22:01.724102       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1009 20:22:01.724108       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1009 20:22:01.809459       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1009 20:22:01.825599       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1009 20:22:01.825742       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 09 20:22:00 newest-cni-160257 kubelet[727]: I1009 20:22:00.863944     727 kubelet_node_status.go:78] "Successfully registered node" node="newest-cni-160257"
	Oct 09 20:22:00 newest-cni-160257 kubelet[727]: I1009 20:22:00.863973     727 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.42.0.0/24"
	Oct 09 20:22:00 newest-cni-160257 kubelet[727]: I1009 20:22:00.864712     727 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.42.0.0/24"
	Oct 09 20:22:00 newest-cni-160257 kubelet[727]: E1009 20:22:00.934393     727 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-newest-cni-160257\" already exists" pod="kube-system/kube-scheduler-newest-cni-160257"
	Oct 09 20:22:00 newest-cni-160257 kubelet[727]: I1009 20:22:00.934444     727 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/etcd-newest-cni-160257"
	Oct 09 20:22:00 newest-cni-160257 kubelet[727]: E1009 20:22:00.950362     727 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"etcd-newest-cni-160257\" already exists" pod="kube-system/etcd-newest-cni-160257"
	Oct 09 20:22:00 newest-cni-160257 kubelet[727]: I1009 20:22:00.950413     727 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-newest-cni-160257"
	Oct 09 20:22:00 newest-cni-160257 kubelet[727]: E1009 20:22:00.988701     727 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-newest-cni-160257\" already exists" pod="kube-system/kube-apiserver-newest-cni-160257"
	Oct 09 20:22:00 newest-cni-160257 kubelet[727]: I1009 20:22:00.988844     727 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-newest-cni-160257"
	Oct 09 20:22:01 newest-cni-160257 kubelet[727]: E1009 20:22:01.019244     727 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-newest-cni-160257\" already exists" pod="kube-system/kube-controller-manager-newest-cni-160257"
	Oct 09 20:22:01 newest-cni-160257 kubelet[727]: E1009 20:22:01.793610     727 projected.go:291] Couldn't get configMap kube-system/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition
	Oct 09 20:22:01 newest-cni-160257 kubelet[727]: E1009 20:22:01.794185     727 projected.go:196] Error preparing data for projected volume kube-api-access-tm542 for pod kube-system/kube-proxy-q5mpb: [failed to fetch token: serviceaccounts "kube-proxy" is forbidden: User "system:node:newest-cni-160257" cannot create resource "serviceaccounts/token" in API group "" in the namespace "kube-system": no relationship found between node 'newest-cni-160257' and this object
	Oct 09 20:22:01 newest-cni-160257 kubelet[727]: RBAC: [clusterrole.rbac.authorization.k8s.io "system:basic-user" not found, clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found, clusterrole.rbac.authorization.k8s.io "system:certificates.k8s.io:certificatesigningrequests:selfnodeclient" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found], failed to sync configmap cache: timed out waiting for the condition]
	Oct 09 20:22:01 newest-cni-160257 kubelet[727]: E1009 20:22:01.794305     727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/efd41b4d-05f4-4870-b04c-cca5ec803e68-kube-api-access-tm542 podName:efd41b4d-05f4-4870-b04c-cca5ec803e68 nodeName:}" failed. No retries permitted until 2025-10-09 20:22:02.294276391 +0000 UTC m=+7.757576730 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-tm542" (UniqueName: "kubernetes.io/projected/efd41b4d-05f4-4870-b04c-cca5ec803e68-kube-api-access-tm542") pod "kube-proxy-q5mpb" (UID: "efd41b4d-05f4-4870-b04c-cca5ec803e68") : [failed to fetch token: serviceaccounts "kube-proxy" is forbidden: User "system:node:newest-cni-160257" cannot create resource "serviceaccounts/token" in API group "" in the namespace "kube-system": no relationship found between node 'newest-cni-160257' and this object
	Oct 09 20:22:01 newest-cni-160257 kubelet[727]: RBAC: [clusterrole.rbac.authorization.k8s.io "system:basic-user" not found, clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found, clusterrole.rbac.authorization.k8s.io "system:certificates.k8s.io:certificatesigningrequests:selfnodeclient" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found], failed to sync configmap cache: timed out waiting for the condition]
	Oct 09 20:22:01 newest-cni-160257 kubelet[727]: E1009 20:22:01.794128     727 projected.go:291] Couldn't get configMap kube-system/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition
	Oct 09 20:22:01 newest-cni-160257 kubelet[727]: E1009 20:22:01.794904     727 projected.go:196] Error preparing data for projected volume kube-api-access-v8lx2 for pod kube-system/kindnet-bgspl: [failed to fetch token: serviceaccounts "kindnet" is forbidden: User "system:node:newest-cni-160257" cannot create resource "serviceaccounts/token" in API group "" in the namespace "kube-system": no relationship found between node 'newest-cni-160257' and this object
	Oct 09 20:22:01 newest-cni-160257 kubelet[727]: RBAC: [clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found, clusterrole.rbac.authorization.k8s.io "system:certificates.k8s.io:certificatesigningrequests:selfnodeclient" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:basic-user" not found], failed to sync configmap cache: timed out waiting for the condition]
	Oct 09 20:22:01 newest-cni-160257 kubelet[727]: E1009 20:22:01.794987     727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/d8f6a466-a843-4773-968c-86550cdbe807-kube-api-access-v8lx2 podName:d8f6a466-a843-4773-968c-86550cdbe807 nodeName:}" failed. No retries permitted until 2025-10-09 20:22:02.294969544 +0000 UTC m=+7.758269883 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-v8lx2" (UniqueName: "kubernetes.io/projected/d8f6a466-a843-4773-968c-86550cdbe807-kube-api-access-v8lx2") pod "kindnet-bgspl" (UID: "d8f6a466-a843-4773-968c-86550cdbe807") : [failed to fetch token: serviceaccounts "kindnet" is forbidden: User "system:node:newest-cni-160257" cannot create resource "serviceaccounts/token" in API group "" in the namespace "kube-system": no relationship found between node 'newest-cni-160257' and this object
	Oct 09 20:22:01 newest-cni-160257 kubelet[727]: RBAC: [clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found, clusterrole.rbac.authorization.k8s.io "system:certificates.k8s.io:certificatesigningrequests:selfnodeclient" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:basic-user" not found], failed to sync configmap cache: timed out waiting for the condition]
	Oct 09 20:22:02 newest-cni-160257 kubelet[727]: I1009 20:22:02.315292     727 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	Oct 09 20:22:02 newest-cni-160257 kubelet[727]: W1009 20:22:02.557080     727 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/b09c68fc79eaa0587a228b9cc096b0eae173cd347de717a0ae93a73ef6ea01b7/crio-2a97c8ad885d0122325e63bae0911cbe824f9686ea71550a710b1c76e2d5ffd6 WatchSource:0}: Error finding container 2a97c8ad885d0122325e63bae0911cbe824f9686ea71550a710b1c76e2d5ffd6: Status 404 returned error can't find the container with id 2a97c8ad885d0122325e63bae0911cbe824f9686ea71550a710b1c76e2d5ffd6
	Oct 09 20:22:04 newest-cni-160257 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 09 20:22:04 newest-cni-160257 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 09 20:22:04 newest-cni-160257 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-160257 -n newest-cni-160257
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-160257 -n newest-cni-160257: exit status 2 (471.344549ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context newest-cni-160257 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: coredns-66bc5c9577-h6jjt storage-provisioner dashboard-metrics-scraper-6ffb444bf9-rqbq8 kubernetes-dashboard-855c9754f9-dbw27
helpers_test.go:282: ======> post-mortem[TestStartStop/group/newest-cni/serial/Pause]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context newest-cni-160257 describe pod coredns-66bc5c9577-h6jjt storage-provisioner dashboard-metrics-scraper-6ffb444bf9-rqbq8 kubernetes-dashboard-855c9754f9-dbw27
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context newest-cni-160257 describe pod coredns-66bc5c9577-h6jjt storage-provisioner dashboard-metrics-scraper-6ffb444bf9-rqbq8 kubernetes-dashboard-855c9754f9-dbw27: exit status 1 (105.407862ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "coredns-66bc5c9577-h6jjt" not found
	Error from server (NotFound): pods "storage-provisioner" not found
	Error from server (NotFound): pods "dashboard-metrics-scraper-6ffb444bf9-rqbq8" not found
	Error from server (NotFound): pods "kubernetes-dashboard-855c9754f9-dbw27" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context newest-cni-160257 describe pod coredns-66bc5c9577-h6jjt storage-provisioner dashboard-metrics-scraper-6ffb444bf9-rqbq8 kubernetes-dashboard-855c9754f9-dbw27: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect newest-cni-160257
helpers_test.go:243: (dbg) docker inspect newest-cni-160257:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "b09c68fc79eaa0587a228b9cc096b0eae173cd347de717a0ae93a73ef6ea01b7",
	        "Created": "2025-10-09T20:21:08.350011602Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 505768,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-09T20:21:47.545091543Z",
	            "FinishedAt": "2025-10-09T20:21:46.373294825Z"
	        },
	        "Image": "sha256:0e619a02aa1f399c748a05785ca381533d7a01df724b62f3a9ddd9e288db8f6f",
	        "ResolvConfPath": "/var/lib/docker/containers/b09c68fc79eaa0587a228b9cc096b0eae173cd347de717a0ae93a73ef6ea01b7/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/b09c68fc79eaa0587a228b9cc096b0eae173cd347de717a0ae93a73ef6ea01b7/hostname",
	        "HostsPath": "/var/lib/docker/containers/b09c68fc79eaa0587a228b9cc096b0eae173cd347de717a0ae93a73ef6ea01b7/hosts",
	        "LogPath": "/var/lib/docker/containers/b09c68fc79eaa0587a228b9cc096b0eae173cd347de717a0ae93a73ef6ea01b7/b09c68fc79eaa0587a228b9cc096b0eae173cd347de717a0ae93a73ef6ea01b7-json.log",
	        "Name": "/newest-cni-160257",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "newest-cni-160257:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "newest-cni-160257",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "b09c68fc79eaa0587a228b9cc096b0eae173cd347de717a0ae93a73ef6ea01b7",
	                "LowerDir": "/var/lib/docker/overlay2/c3f22559d24a79b75dffe8207445a15a01a15487326878a474489ec60730e13e-init/diff:/var/lib/docker/overlay2/810a91395ed9b7ed2c0bbbdee8600efcf64f88722cbabc47d471235a9f901ed9/diff",
	                "MergedDir": "/var/lib/docker/overlay2/c3f22559d24a79b75dffe8207445a15a01a15487326878a474489ec60730e13e/merged",
	                "UpperDir": "/var/lib/docker/overlay2/c3f22559d24a79b75dffe8207445a15a01a15487326878a474489ec60730e13e/diff",
	                "WorkDir": "/var/lib/docker/overlay2/c3f22559d24a79b75dffe8207445a15a01a15487326878a474489ec60730e13e/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "newest-cni-160257",
	                "Source": "/var/lib/docker/volumes/newest-cni-160257/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-160257",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-160257",
	                "name.minikube.sigs.k8s.io": "newest-cni-160257",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "d9106aab5038866665aec1d9cf1595beaf76e67fcf40a13bced8ad21a40fb654",
	            "SandboxKey": "/var/run/docker/netns/d9106aab5038",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33461"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33462"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33465"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33463"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33464"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "newest-cni-160257": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "12:d1:17:17:9c:f8",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "870f78d9db06c84c4e340afedcf88d286d22b52c6864f8eefaae6f4f49447e55",
	                    "EndpointID": "c352e0783083d9ca3bbb1795bbe79f2fbde3bbaffdc19df66e814234184d5914",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "newest-cni-160257",
	                        "b09c68fc79ea"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-160257 -n newest-cni-160257
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-160257 -n newest-cni-160257: exit status 2 (476.771767ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/newest-cni/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-160257 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p newest-cni-160257 logs -n 25: (1.312109658s)
helpers_test.go:260: TestStartStop/group/newest-cni/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬──────────────
───────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼──────────────
───────┤
	│ delete  │ -p no-preload-020313                                                                                                                                                                                                                          │ no-preload-020313            │ jenkins │ v1.37.0 │ 09 Oct 25 20:19 UTC │ 09 Oct 25 20:19 UTC │
	│ delete  │ -p no-preload-020313                                                                                                                                                                                                                          │ no-preload-020313            │ jenkins │ v1.37.0 │ 09 Oct 25 20:19 UTC │ 09 Oct 25 20:19 UTC │
	│ delete  │ -p disable-driver-mounts-613966                                                                                                                                                                                                               │ disable-driver-mounts-613966 │ jenkins │ v1.37.0 │ 09 Oct 25 20:19 UTC │ 09 Oct 25 20:19 UTC │
	│ start   │ -p default-k8s-diff-port-417984 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-417984 │ jenkins │ v1.37.0 │ 09 Oct 25 20:19 UTC │ 09 Oct 25 20:20 UTC │
	│ addons  │ enable metrics-server -p embed-certs-565110 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-565110           │ jenkins │ v1.37.0 │ 09 Oct 25 20:19 UTC │                     │
	│ stop    │ -p embed-certs-565110 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-565110           │ jenkins │ v1.37.0 │ 09 Oct 25 20:19 UTC │ 09 Oct 25 20:19 UTC │
	│ addons  │ enable dashboard -p embed-certs-565110 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-565110           │ jenkins │ v1.37.0 │ 09 Oct 25 20:19 UTC │ 09 Oct 25 20:19 UTC │
	│ start   │ -p embed-certs-565110 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-565110           │ jenkins │ v1.37.0 │ 09 Oct 25 20:19 UTC │ 09 Oct 25 20:20 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-417984 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-417984 │ jenkins │ v1.37.0 │ 09 Oct 25 20:20 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-417984 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-417984 │ jenkins │ v1.37.0 │ 09 Oct 25 20:20 UTC │ 09 Oct 25 20:20 UTC │
	│ image   │ embed-certs-565110 image list --format=json                                                                                                                                                                                                   │ embed-certs-565110           │ jenkins │ v1.37.0 │ 09 Oct 25 20:20 UTC │ 09 Oct 25 20:20 UTC │
	│ pause   │ -p embed-certs-565110 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-565110           │ jenkins │ v1.37.0 │ 09 Oct 25 20:20 UTC │                     │
	│ addons  │ enable dashboard -p default-k8s-diff-port-417984 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-417984 │ jenkins │ v1.37.0 │ 09 Oct 25 20:20 UTC │ 09 Oct 25 20:20 UTC │
	│ delete  │ -p embed-certs-565110                                                                                                                                                                                                                         │ embed-certs-565110           │ jenkins │ v1.37.0 │ 09 Oct 25 20:20 UTC │ 09 Oct 25 20:21 UTC │
	│ start   │ -p default-k8s-diff-port-417984 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-417984 │ jenkins │ v1.37.0 │ 09 Oct 25 20:20 UTC │ 09 Oct 25 20:21 UTC │
	│ delete  │ -p embed-certs-565110                                                                                                                                                                                                                         │ embed-certs-565110           │ jenkins │ v1.37.0 │ 09 Oct 25 20:21 UTC │ 09 Oct 25 20:21 UTC │
	│ start   │ -p newest-cni-160257 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-160257            │ jenkins │ v1.37.0 │ 09 Oct 25 20:21 UTC │ 09 Oct 25 20:21 UTC │
	│ addons  │ enable metrics-server -p newest-cni-160257 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-160257            │ jenkins │ v1.37.0 │ 09 Oct 25 20:21 UTC │                     │
	│ stop    │ -p newest-cni-160257 --alsologtostderr -v=3                                                                                                                                                                                                   │ newest-cni-160257            │ jenkins │ v1.37.0 │ 09 Oct 25 20:21 UTC │ 09 Oct 25 20:21 UTC │
	│ addons  │ enable dashboard -p newest-cni-160257 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ newest-cni-160257            │ jenkins │ v1.37.0 │ 09 Oct 25 20:21 UTC │ 09 Oct 25 20:21 UTC │
	│ start   │ -p newest-cni-160257 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-160257            │ jenkins │ v1.37.0 │ 09 Oct 25 20:21 UTC │ 09 Oct 25 20:22 UTC │
	│ image   │ newest-cni-160257 image list --format=json                                                                                                                                                                                                    │ newest-cni-160257            │ jenkins │ v1.37.0 │ 09 Oct 25 20:22 UTC │ 09 Oct 25 20:22 UTC │
	│ pause   │ -p newest-cni-160257 --alsologtostderr -v=1                                                                                                                                                                                                   │ newest-cni-160257            │ jenkins │ v1.37.0 │ 09 Oct 25 20:22 UTC │                     │
	│ image   │ default-k8s-diff-port-417984 image list --format=json                                                                                                                                                                                         │ default-k8s-diff-port-417984 │ jenkins │ v1.37.0 │ 09 Oct 25 20:22 UTC │ 09 Oct 25 20:22 UTC │
	│ pause   │ -p default-k8s-diff-port-417984 --alsologtostderr -v=1                                                                                                                                                                                        │ default-k8s-diff-port-417984 │ jenkins │ v1.37.0 │ 09 Oct 25 20:22 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴──────────────
───────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/09 20:21:47
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1009 20:21:47.225777  505641 out.go:360] Setting OutFile to fd 1 ...
	I1009 20:21:47.225898  505641 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1009 20:21:47.225907  505641 out.go:374] Setting ErrFile to fd 2...
	I1009 20:21:47.225913  505641 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1009 20:21:47.226184  505641 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21683-294150/.minikube/bin
	I1009 20:21:47.226608  505641 out.go:368] Setting JSON to false
	I1009 20:21:47.227622  505641 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":11047,"bootTime":1760030261,"procs":204,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1009 20:21:47.227724  505641 start.go:143] virtualization:  
	I1009 20:21:47.232792  505641 out.go:179] * [newest-cni-160257] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1009 20:21:47.236046  505641 out.go:179]   - MINIKUBE_LOCATION=21683
	I1009 20:21:47.236091  505641 notify.go:221] Checking for updates...
	I1009 20:21:47.244521  505641 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1009 20:21:47.247836  505641 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21683-294150/kubeconfig
	I1009 20:21:47.252638  505641 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21683-294150/.minikube
	I1009 20:21:47.255969  505641 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1009 20:21:47.259775  505641 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1009 20:21:47.263692  505641 config.go:182] Loaded profile config "newest-cni-160257": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1009 20:21:47.264262  505641 driver.go:422] Setting default libvirt URI to qemu:///system
	I1009 20:21:47.300892  505641 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1009 20:21:47.301004  505641 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1009 20:21:47.375444  505641 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-09 20:21:47.364461238 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214827008 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1009 20:21:47.375566  505641 docker.go:319] overlay module found
	I1009 20:21:47.378702  505641 out.go:179] * Using the docker driver based on existing profile
	I1009 20:21:47.381764  505641 start.go:309] selected driver: docker
	I1009 20:21:47.381788  505641 start.go:930] validating driver "docker" against &{Name:newest-cni-160257 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-160257 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1009 20:21:47.381896  505641 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1009 20:21:47.382683  505641 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1009 20:21:47.444269  505641 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-09 20:21:47.43523216 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214827008 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1009 20:21:47.444618  505641 start_flags.go:1012] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1009 20:21:47.444655  505641 cni.go:84] Creating CNI manager for ""
	I1009 20:21:47.444713  505641 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1009 20:21:47.444753  505641 start.go:353] cluster config:
	{Name:newest-cni-160257 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-160257 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1009 20:21:47.449762  505641 out.go:179] * Starting "newest-cni-160257" primary control-plane node in "newest-cni-160257" cluster
	I1009 20:21:47.452638  505641 cache.go:123] Beginning downloading kic base image for docker with crio
	I1009 20:21:47.455508  505641 out.go:179] * Pulling base image v0.0.48-1759745255-21703 ...
	I1009 20:21:47.458251  505641 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1009 20:21:47.458312  505641 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21683-294150/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1009 20:21:47.458325  505641 cache.go:58] Caching tarball of preloaded images
	I1009 20:21:47.458336  505641 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon
	I1009 20:21:47.458406  505641 preload.go:233] Found /home/jenkins/minikube-integration/21683-294150/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1009 20:21:47.458416  505641 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1009 20:21:47.458536  505641 profile.go:143] Saving config to /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/newest-cni-160257/config.json ...
	I1009 20:21:47.478231  505641 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon, skipping pull
	I1009 20:21:47.478255  505641 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 exists in daemon, skipping load
	I1009 20:21:47.478273  505641 cache.go:232] Successfully downloaded all kic artifacts
	I1009 20:21:47.478297  505641 start.go:361] acquireMachinesLock for newest-cni-160257: {Name:mkab4aa92a505aec53d4bce517e62dd4f38ff19e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1009 20:21:47.478362  505641 start.go:365] duration metric: took 36.932µs to acquireMachinesLock for "newest-cni-160257"
	I1009 20:21:47.478381  505641 start.go:97] Skipping create...Using existing machine configuration
	I1009 20:21:47.478392  505641 fix.go:55] fixHost starting: 
	I1009 20:21:47.478676  505641 cli_runner.go:164] Run: docker container inspect newest-cni-160257 --format={{.State.Status}}
	I1009 20:21:47.498453  505641 fix.go:113] recreateIfNeeded on newest-cni-160257: state=Stopped err=<nil>
	W1009 20:21:47.498498  505641 fix.go:139] unexpected machine state, will restart: <nil>
	W1009 20:21:46.391695  500265 pod_ready.go:104] pod "coredns-66bc5c9577-4c2vb" is not "Ready", error: <nil>
	W1009 20:21:48.892695  500265 pod_ready.go:104] pod "coredns-66bc5c9577-4c2vb" is not "Ready", error: <nil>
	I1009 20:21:47.501793  505641 out.go:252] * Restarting existing docker container for "newest-cni-160257" ...
	I1009 20:21:47.501900  505641 cli_runner.go:164] Run: docker start newest-cni-160257
	I1009 20:21:47.774834  505641 cli_runner.go:164] Run: docker container inspect newest-cni-160257 --format={{.State.Status}}
	I1009 20:21:47.798720  505641 kic.go:430] container "newest-cni-160257" state is running.
	I1009 20:21:47.799854  505641 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-160257
	I1009 20:21:47.826771  505641 profile.go:143] Saving config to /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/newest-cni-160257/config.json ...
	I1009 20:21:47.827119  505641 machine.go:93] provisionDockerMachine start ...
	I1009 20:21:47.827296  505641 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-160257
	I1009 20:21:47.853865  505641 main.go:141] libmachine: Using SSH client type: native
	I1009 20:21:47.854202  505641 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33461 <nil> <nil>}
	I1009 20:21:47.854217  505641 main.go:141] libmachine: About to run SSH command:
	hostname
	I1009 20:21:47.854805  505641 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:37994->127.0.0.1:33461: read: connection reset by peer
	I1009 20:21:51.009336  505641 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-160257
	
	I1009 20:21:51.009371  505641 ubuntu.go:182] provisioning hostname "newest-cni-160257"
	I1009 20:21:51.009454  505641 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-160257
	I1009 20:21:51.028512  505641 main.go:141] libmachine: Using SSH client type: native
	I1009 20:21:51.028834  505641 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33461 <nil> <nil>}
	I1009 20:21:51.028846  505641 main.go:141] libmachine: About to run SSH command:
	sudo hostname newest-cni-160257 && echo "newest-cni-160257" | sudo tee /etc/hostname
	I1009 20:21:51.195197  505641 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-160257
	
	I1009 20:21:51.195290  505641 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-160257
	I1009 20:21:51.214100  505641 main.go:141] libmachine: Using SSH client type: native
	I1009 20:21:51.214407  505641 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33461 <nil> <nil>}
	I1009 20:21:51.214425  505641 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-160257' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-160257/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-160257' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1009 20:21:51.385834  505641 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1009 20:21:51.385861  505641 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21683-294150/.minikube CaCertPath:/home/jenkins/minikube-integration/21683-294150/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21683-294150/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21683-294150/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21683-294150/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21683-294150/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21683-294150/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21683-294150/.minikube}
	I1009 20:21:51.385883  505641 ubuntu.go:190] setting up certificates
	I1009 20:21:51.385893  505641 provision.go:84] configureAuth start
	I1009 20:21:51.385966  505641 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-160257
	I1009 20:21:51.410176  505641 provision.go:143] copyHostCerts
	I1009 20:21:51.410244  505641 exec_runner.go:144] found /home/jenkins/minikube-integration/21683-294150/.minikube/ca.pem, removing ...
	I1009 20:21:51.410265  505641 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21683-294150/.minikube/ca.pem
	I1009 20:21:51.410352  505641 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-294150/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21683-294150/.minikube/ca.pem (1078 bytes)
	I1009 20:21:51.410465  505641 exec_runner.go:144] found /home/jenkins/minikube-integration/21683-294150/.minikube/cert.pem, removing ...
	I1009 20:21:51.410477  505641 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21683-294150/.minikube/cert.pem
	I1009 20:21:51.410505  505641 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-294150/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21683-294150/.minikube/cert.pem (1123 bytes)
	I1009 20:21:51.410574  505641 exec_runner.go:144] found /home/jenkins/minikube-integration/21683-294150/.minikube/key.pem, removing ...
	I1009 20:21:51.410588  505641 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21683-294150/.minikube/key.pem
	I1009 20:21:51.410615  505641 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-294150/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21683-294150/.minikube/key.pem (1679 bytes)
	I1009 20:21:51.410679  505641 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21683-294150/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21683-294150/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21683-294150/.minikube/certs/ca-key.pem org=jenkins.newest-cni-160257 san=[127.0.0.1 192.168.76.2 localhost minikube newest-cni-160257]
	I1009 20:21:51.863331  505641 provision.go:177] copyRemoteCerts
	I1009 20:21:51.863402  505641 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1009 20:21:51.863461  505641 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-160257
	I1009 20:21:51.880865  505641 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33461 SSHKeyPath:/home/jenkins/minikube-integration/21683-294150/.minikube/machines/newest-cni-160257/id_rsa Username:docker}
	I1009 20:21:51.993391  505641 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-294150/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1009 20:21:52.015149  505641 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-294150/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1009 20:21:52.036074  505641 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-294150/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1009 20:21:52.055481  505641 provision.go:87] duration metric: took 669.560055ms to configureAuth
	I1009 20:21:52.055507  505641 ubuntu.go:206] setting minikube options for container-runtime
	I1009 20:21:52.055721  505641 config.go:182] Loaded profile config "newest-cni-160257": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1009 20:21:52.055831  505641 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-160257
	I1009 20:21:52.074474  505641 main.go:141] libmachine: Using SSH client type: native
	I1009 20:21:52.074899  505641 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33461 <nil> <nil>}
	I1009 20:21:52.074924  505641 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1009 20:21:52.387858  505641 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1009 20:21:52.387888  505641 machine.go:96] duration metric: took 4.560754136s to provisionDockerMachine
	I1009 20:21:52.387899  505641 start.go:294] postStartSetup for "newest-cni-160257" (driver="docker")
	I1009 20:21:52.387910  505641 start.go:323] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1009 20:21:52.387969  505641 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1009 20:21:52.388017  505641 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-160257
	I1009 20:21:52.412217  505641 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33461 SSHKeyPath:/home/jenkins/minikube-integration/21683-294150/.minikube/machines/newest-cni-160257/id_rsa Username:docker}
	I1009 20:21:52.526292  505641 ssh_runner.go:195] Run: cat /etc/os-release
	I1009 20:21:52.530482  505641 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1009 20:21:52.530553  505641 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1009 20:21:52.530579  505641 filesync.go:126] Scanning /home/jenkins/minikube-integration/21683-294150/.minikube/addons for local assets ...
	I1009 20:21:52.530667  505641 filesync.go:126] Scanning /home/jenkins/minikube-integration/21683-294150/.minikube/files for local assets ...
	I1009 20:21:52.530782  505641 filesync.go:149] local asset: /home/jenkins/minikube-integration/21683-294150/.minikube/files/etc/ssl/certs/2960022.pem -> 2960022.pem in /etc/ssl/certs
	I1009 20:21:52.530942  505641 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1009 20:21:52.540673  505641 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-294150/.minikube/files/etc/ssl/certs/2960022.pem --> /etc/ssl/certs/2960022.pem (1708 bytes)
	I1009 20:21:52.563308  505641 start.go:297] duration metric: took 175.393484ms for postStartSetup
	I1009 20:21:52.563461  505641 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1009 20:21:52.563528  505641 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-160257
	I1009 20:21:52.580881  505641 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33461 SSHKeyPath:/home/jenkins/minikube-integration/21683-294150/.minikube/machines/newest-cni-160257/id_rsa Username:docker}
	I1009 20:21:52.682184  505641 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1009 20:21:52.686923  505641 fix.go:57] duration metric: took 5.208523021s for fixHost
	I1009 20:21:52.686956  505641 start.go:84] releasing machines lock for "newest-cni-160257", held for 5.208584913s
	I1009 20:21:52.687039  505641 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-160257
	I1009 20:21:52.704600  505641 ssh_runner.go:195] Run: cat /version.json
	I1009 20:21:52.704636  505641 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1009 20:21:52.704650  505641 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-160257
	I1009 20:21:52.704689  505641 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-160257
	I1009 20:21:52.732613  505641 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33461 SSHKeyPath:/home/jenkins/minikube-integration/21683-294150/.minikube/machines/newest-cni-160257/id_rsa Username:docker}
	I1009 20:21:52.746816  505641 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33461 SSHKeyPath:/home/jenkins/minikube-integration/21683-294150/.minikube/machines/newest-cni-160257/id_rsa Username:docker}
	I1009 20:21:52.956865  505641 ssh_runner.go:195] Run: systemctl --version
	I1009 20:21:52.963451  505641 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1009 20:21:53.007462  505641 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1009 20:21:53.013409  505641 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1009 20:21:53.013495  505641 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1009 20:21:53.022944  505641 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1009 20:21:53.022975  505641 start.go:496] detecting cgroup driver to use...
	I1009 20:21:53.023044  505641 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1009 20:21:53.023144  505641 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1009 20:21:53.038534  505641 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1009 20:21:53.052168  505641 docker.go:218] disabling cri-docker service (if available) ...
	I1009 20:21:53.052276  505641 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1009 20:21:53.068820  505641 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1009 20:21:53.083319  505641 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1009 20:21:53.207043  505641 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1009 20:21:53.336124  505641 docker.go:234] disabling docker service ...
	I1009 20:21:53.336193  505641 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1009 20:21:53.352807  505641 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1009 20:21:53.366353  505641 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1009 20:21:53.489904  505641 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1009 20:21:53.611315  505641 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1009 20:21:53.625318  505641 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1009 20:21:53.641664  505641 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1009 20:21:53.641780  505641 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 20:21:53.651576  505641 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1009 20:21:53.651646  505641 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 20:21:53.662547  505641 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 20:21:53.671928  505641 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 20:21:53.681650  505641 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1009 20:21:53.690299  505641 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 20:21:53.699940  505641 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 20:21:53.709708  505641 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 20:21:53.719273  505641 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1009 20:21:53.728115  505641 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1009 20:21:53.736223  505641 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 20:21:53.859445  505641 ssh_runner.go:195] Run: sudo systemctl restart crio
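	For reference, the tee and sed commands logged just above would leave the runtime configuration roughly as follows; this is a sketch reconstructed from those commands, not output captured from the test host:
	
	  # /etc/crictl.yaml
	  runtime-endpoint: unix:///var/run/crio/crio.sock
	
	  # /etc/crio/crio.conf.d/02-crio.conf (relevant lines after the edits)
	  pause_image = "registry.k8s.io/pause:3.10.1"
	  cgroup_manager = "cgroupfs"
	  conmon_cgroup = "pod"
	  default_sysctls = [
	    "net.ipv4.ip_unprivileged_port_start=0",
	  ]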
	I1009 20:21:54.009289  505641 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1009 20:21:54.009385  505641 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1009 20:21:54.014669  505641 start.go:564] Will wait 60s for crictl version
	I1009 20:21:54.014762  505641 ssh_runner.go:195] Run: which crictl
	I1009 20:21:54.018778  505641 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1009 20:21:54.044923  505641 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1009 20:21:54.045032  505641 ssh_runner.go:195] Run: crio --version
	I1009 20:21:54.077028  505641 ssh_runner.go:195] Run: crio --version
	I1009 20:21:54.111747  505641 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1009 20:21:54.114671  505641 cli_runner.go:164] Run: docker network inspect newest-cni-160257 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1009 20:21:54.130922  505641 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1009 20:21:54.134873  505641 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1009 20:21:54.150070  505641 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	W1009 20:21:50.893834  500265 pod_ready.go:104] pod "coredns-66bc5c9577-4c2vb" is not "Ready", error: <nil>
	W1009 20:21:53.392803  500265 pod_ready.go:104] pod "coredns-66bc5c9577-4c2vb" is not "Ready", error: <nil>
	I1009 20:21:54.393013  500265 pod_ready.go:94] pod "coredns-66bc5c9577-4c2vb" is "Ready"
	I1009 20:21:54.393038  500265 pod_ready.go:86] duration metric: took 32.006387261s for pod "coredns-66bc5c9577-4c2vb" in "kube-system" namespace to be "Ready" or be gone ...
	I1009 20:21:54.396978  500265 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-417984" in "kube-system" namespace to be "Ready" or be gone ...
	I1009 20:21:54.403052  500265 pod_ready.go:94] pod "etcd-default-k8s-diff-port-417984" is "Ready"
	I1009 20:21:54.403075  500265 pod_ready.go:86] duration metric: took 6.075564ms for pod "etcd-default-k8s-diff-port-417984" in "kube-system" namespace to be "Ready" or be gone ...
	I1009 20:21:54.406444  500265 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-417984" in "kube-system" namespace to be "Ready" or be gone ...
	I1009 20:21:54.412180  500265 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-417984" is "Ready"
	I1009 20:21:54.412203  500265 pod_ready.go:86] duration metric: took 5.733758ms for pod "kube-apiserver-default-k8s-diff-port-417984" in "kube-system" namespace to be "Ready" or be gone ...
	I1009 20:21:54.415033  500265 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-417984" in "kube-system" namespace to be "Ready" or be gone ...
	I1009 20:21:54.152910  505641 kubeadm.go:883] updating cluster {Name:newest-cni-160257 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-160257 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1009 20:21:54.153074  505641 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1009 20:21:54.153263  505641 ssh_runner.go:195] Run: sudo crictl images --output json
	I1009 20:21:54.201986  505641 crio.go:514] all images are preloaded for cri-o runtime.
	I1009 20:21:54.202008  505641 crio.go:433] Images already preloaded, skipping extraction
	I1009 20:21:54.202092  505641 ssh_runner.go:195] Run: sudo crictl images --output json
	I1009 20:21:54.230381  505641 crio.go:514] all images are preloaded for cri-o runtime.
	I1009 20:21:54.230446  505641 cache_images.go:85] Images are preloaded, skipping loading
	I1009 20:21:54.230479  505641 kubeadm.go:934] updating node { 192.168.76.2 8443 v1.34.1 crio true true} ...
	I1009 20:21:54.230592  505641 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=newest-cni-160257 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:newest-cni-160257 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1009 20:21:54.230706  505641 ssh_runner.go:195] Run: crio config
	I1009 20:21:54.302972  505641 cni.go:84] Creating CNI manager for ""
	I1009 20:21:54.303001  505641 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1009 20:21:54.303044  505641 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1009 20:21:54.303084  505641 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-160257 NodeName:newest-cni-160257 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1009 20:21:54.303315  505641 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-160257"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1009 20:21:54.303416  505641 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1009 20:21:54.315581  505641 binaries.go:44] Found k8s binaries, skipping transfer
	I1009 20:21:54.315723  505641 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1009 20:21:54.323840  505641 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1009 20:21:54.344587  505641 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1009 20:21:54.359645  505641 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2212 bytes)
	I1009 20:21:54.373013  505641 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1009 20:21:54.376761  505641 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1009 20:21:54.386749  505641 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 20:21:54.517399  505641 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1009 20:21:54.534719  505641 certs.go:69] Setting up /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/newest-cni-160257 for IP: 192.168.76.2
	I1009 20:21:54.534737  505641 certs.go:195] generating shared ca certs ...
	I1009 20:21:54.534761  505641 certs.go:227] acquiring lock for ca certs: {Name:mk21fccf7de12baa51e226eec44546b42b579a70 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 20:21:54.534896  505641 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21683-294150/.minikube/ca.key
	I1009 20:21:54.534936  505641 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21683-294150/.minikube/proxy-client-ca.key
	I1009 20:21:54.534943  505641 certs.go:257] generating profile certs ...
	I1009 20:21:54.535020  505641 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/newest-cni-160257/client.key
	I1009 20:21:54.535080  505641 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/newest-cni-160257/apiserver.key.f76169c2
	I1009 20:21:54.535117  505641 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/newest-cni-160257/proxy-client.key
	I1009 20:21:54.535227  505641 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-294150/.minikube/certs/296002.pem (1338 bytes)
	W1009 20:21:54.535254  505641 certs.go:480] ignoring /home/jenkins/minikube-integration/21683-294150/.minikube/certs/296002_empty.pem, impossibly tiny 0 bytes
	I1009 20:21:54.535262  505641 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-294150/.minikube/certs/ca-key.pem (1679 bytes)
	I1009 20:21:54.535293  505641 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-294150/.minikube/certs/ca.pem (1078 bytes)
	I1009 20:21:54.535320  505641 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-294150/.minikube/certs/cert.pem (1123 bytes)
	I1009 20:21:54.535341  505641 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-294150/.minikube/certs/key.pem (1679 bytes)
	I1009 20:21:54.535381  505641 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-294150/.minikube/files/etc/ssl/certs/2960022.pem (1708 bytes)
	I1009 20:21:54.535945  505641 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-294150/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1009 20:21:54.556736  505641 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-294150/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1009 20:21:54.577440  505641 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-294150/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1009 20:21:54.598476  505641 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-294150/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1009 20:21:54.618936  505641 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/newest-cni-160257/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1009 20:21:54.641991  505641 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/newest-cni-160257/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1009 20:21:54.675026  505641 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/newest-cni-160257/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1009 20:21:54.699287  505641 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/newest-cni-160257/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1009 20:21:54.720084  505641 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-294150/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1009 20:21:54.749667  505641 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-294150/.minikube/certs/296002.pem --> /usr/share/ca-certificates/296002.pem (1338 bytes)
	I1009 20:21:54.779863  505641 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-294150/.minikube/files/etc/ssl/certs/2960022.pem --> /usr/share/ca-certificates/2960022.pem (1708 bytes)
	I1009 20:21:54.802813  505641 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1009 20:21:54.820188  505641 ssh_runner.go:195] Run: openssl version
	I1009 20:21:54.828192  505641 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1009 20:21:54.838549  505641 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1009 20:21:54.844886  505641 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  9 19:01 /usr/share/ca-certificates/minikubeCA.pem
	I1009 20:21:54.845006  505641 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1009 20:21:54.901921  505641 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1009 20:21:54.911134  505641 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/296002.pem && ln -fs /usr/share/ca-certificates/296002.pem /etc/ssl/certs/296002.pem"
	I1009 20:21:54.919928  505641 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/296002.pem
	I1009 20:21:54.924078  505641 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  9 19:08 /usr/share/ca-certificates/296002.pem
	I1009 20:21:54.924152  505641 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/296002.pem
	I1009 20:21:54.965535  505641 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/296002.pem /etc/ssl/certs/51391683.0"
	I1009 20:21:54.974008  505641 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2960022.pem && ln -fs /usr/share/ca-certificates/2960022.pem /etc/ssl/certs/2960022.pem"
	I1009 20:21:54.983672  505641 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2960022.pem
	I1009 20:21:54.987949  505641 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  9 19:08 /usr/share/ca-certificates/2960022.pem
	I1009 20:21:54.988063  505641 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2960022.pem
	I1009 20:21:55.032343  505641 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2960022.pem /etc/ssl/certs/3ec20f2e.0"
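	The openssl x509 -hash calls above compute the subject-name hash that OpenSSL uses to look up CA certificates, and the <hash>.0 symlinks in /etc/ssl/certs are the standard lookup convention. A minimal sketch of the same three-step sequence shown in the log (using the minikubeCA path from above):
	
	  hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	  sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${hash}.0"   # hash is b5213941 per the log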
	I1009 20:21:55.042132  505641 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1009 20:21:55.047524  505641 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1009 20:21:55.091675  505641 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1009 20:21:55.150741  505641 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1009 20:21:55.227909  505641 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1009 20:21:55.337070  505641 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1009 20:21:55.407552  505641 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1009 20:21:55.474195  505641 kubeadm.go:400] StartCluster: {Name:newest-cni-160257 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-160257 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1009 20:21:55.474304  505641 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1009 20:21:55.474430  505641 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1009 20:21:55.531354  505641 cri.go:89] found id: "c44d38264b4c8f90676162945fe05a02624867f517df88a401a8ae08e56998fc"
	I1009 20:21:55.531377  505641 cri.go:89] found id: "abf4f184374a54e8b81747413f26453de47bd605a2aeb2a0889c7f019dc40141"
	I1009 20:21:55.531383  505641 cri.go:89] found id: "8e9d85a685b554c78f90ad52ce9e2e08feb85c5a0c3c0cecaa44409529755644"
	I1009 20:21:55.531387  505641 cri.go:89] found id: "1bb5609884005775d5cb2c3c1d622130225e6c83a8497006aa8e75133f859524"
	I1009 20:21:55.531391  505641 cri.go:89] found id: ""
	I1009 20:21:55.531471  505641 ssh_runner.go:195] Run: sudo runc list -f json
	W1009 20:21:55.544796  505641 kubeadm.go:407] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-09T20:21:55Z" level=error msg="open /run/runc: no such file or directory"
	I1009 20:21:55.544919  505641 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1009 20:21:55.554602  505641 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1009 20:21:55.554623  505641 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1009 20:21:55.554701  505641 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1009 20:21:55.564787  505641 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1009 20:21:55.565500  505641 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-160257" does not appear in /home/jenkins/minikube-integration/21683-294150/kubeconfig
	I1009 20:21:55.565859  505641 kubeconfig.go:62] /home/jenkins/minikube-integration/21683-294150/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-160257" cluster setting kubeconfig missing "newest-cni-160257" context setting]
	I1009 20:21:55.566392  505641 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-294150/kubeconfig: {Name:mke4661344aa77e10bf6690825e8d3a29b29b411 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 20:21:55.568289  505641 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1009 20:21:55.578851  505641 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.76.2
	I1009 20:21:55.578897  505641 kubeadm.go:601] duration metric: took 24.267047ms to restartPrimaryControlPlane
	I1009 20:21:55.578907  505641 kubeadm.go:402] duration metric: took 104.740677ms to StartCluster
	I1009 20:21:55.578944  505641 settings.go:142] acquiring lock: {Name:mk20228ebaa2294ae35726600a0d8058088b24a4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 20:21:55.579059  505641 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21683-294150/kubeconfig
	I1009 20:21:55.580100  505641 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-294150/kubeconfig: {Name:mke4661344aa77e10bf6690825e8d3a29b29b411 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 20:21:55.580385  505641 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1009 20:21:55.580788  505641 config.go:182] Loaded profile config "newest-cni-160257": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1009 20:21:55.581007  505641 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1009 20:21:55.581158  505641 addons.go:69] Setting storage-provisioner=true in profile "newest-cni-160257"
	I1009 20:21:55.581195  505641 addons.go:238] Setting addon storage-provisioner=true in "newest-cni-160257"
	W1009 20:21:55.581214  505641 addons.go:247] addon storage-provisioner should already be in state true
	I1009 20:21:55.581264  505641 host.go:66] Checking if "newest-cni-160257" exists ...
	I1009 20:21:55.581895  505641 cli_runner.go:164] Run: docker container inspect newest-cni-160257 --format={{.State.Status}}
	I1009 20:21:55.582119  505641 addons.go:69] Setting dashboard=true in profile "newest-cni-160257"
	I1009 20:21:55.582152  505641 addons.go:238] Setting addon dashboard=true in "newest-cni-160257"
	W1009 20:21:55.582166  505641 addons.go:247] addon dashboard should already be in state true
	I1009 20:21:55.582202  505641 host.go:66] Checking if "newest-cni-160257" exists ...
	I1009 20:21:55.582545  505641 addons.go:69] Setting default-storageclass=true in profile "newest-cni-160257"
	I1009 20:21:55.582570  505641 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-160257"
	I1009 20:21:55.582699  505641 cli_runner.go:164] Run: docker container inspect newest-cni-160257 --format={{.State.Status}}
	I1009 20:21:55.582865  505641 cli_runner.go:164] Run: docker container inspect newest-cni-160257 --format={{.State.Status}}
	I1009 20:21:55.587168  505641 out.go:179] * Verifying Kubernetes components...
	I1009 20:21:55.590497  505641 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 20:21:55.647593  505641 addons.go:238] Setting addon default-storageclass=true in "newest-cni-160257"
	W1009 20:21:55.647617  505641 addons.go:247] addon default-storageclass should already be in state true
	I1009 20:21:55.647642  505641 host.go:66] Checking if "newest-cni-160257" exists ...
	I1009 20:21:55.648040  505641 cli_runner.go:164] Run: docker container inspect newest-cni-160257 --format={{.State.Status}}
	I1009 20:21:55.651603  505641 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1009 20:21:55.654619  505641 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1009 20:21:55.657431  505641 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1009 20:21:54.590365  500265 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-417984" is "Ready"
	I1009 20:21:54.590394  500265 pod_ready.go:86] duration metric: took 175.292139ms for pod "kube-controller-manager-default-k8s-diff-port-417984" in "kube-system" namespace to be "Ready" or be gone ...
	I1009 20:21:54.791524  500265 pod_ready.go:83] waiting for pod "kube-proxy-jnlzf" in "kube-system" namespace to be "Ready" or be gone ...
	I1009 20:21:55.191059  500265 pod_ready.go:94] pod "kube-proxy-jnlzf" is "Ready"
	I1009 20:21:55.191086  500265 pod_ready.go:86] duration metric: took 399.520534ms for pod "kube-proxy-jnlzf" in "kube-system" namespace to be "Ready" or be gone ...
	I1009 20:21:55.401649  500265 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-417984" in "kube-system" namespace to be "Ready" or be gone ...
	I1009 20:21:55.790702  500265 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-417984" is "Ready"
	I1009 20:21:55.790734  500265 pod_ready.go:86] duration metric: took 389.05888ms for pod "kube-scheduler-default-k8s-diff-port-417984" in "kube-system" namespace to be "Ready" or be gone ...
	I1009 20:21:55.790747  500265 pod_ready.go:40] duration metric: took 33.412767938s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1009 20:21:55.908403  500265 start.go:628] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1009 20:21:55.912550  500265 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-417984" cluster and "default" namespace by default
	I1009 20:21:55.657478  505641 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1009 20:21:55.657494  505641 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1009 20:21:55.657567  505641 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-160257
	I1009 20:21:55.660364  505641 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1009 20:21:55.660398  505641 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1009 20:21:55.660470  505641 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-160257
	I1009 20:21:55.697377  505641 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33461 SSHKeyPath:/home/jenkins/minikube-integration/21683-294150/.minikube/machines/newest-cni-160257/id_rsa Username:docker}
	I1009 20:21:55.699465  505641 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1009 20:21:55.699490  505641 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1009 20:21:55.699555  505641 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-160257
	I1009 20:21:55.731795  505641 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33461 SSHKeyPath:/home/jenkins/minikube-integration/21683-294150/.minikube/machines/newest-cni-160257/id_rsa Username:docker}
	I1009 20:21:55.737875  505641 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33461 SSHKeyPath:/home/jenkins/minikube-integration/21683-294150/.minikube/machines/newest-cni-160257/id_rsa Username:docker}
	I1009 20:21:56.004907  505641 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1009 20:21:56.076607  505641 api_server.go:52] waiting for apiserver process to appear ...
	I1009 20:21:56.076689  505641 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:21:56.092264  505641 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1009 20:21:56.114169  505641 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1009 20:21:56.195392  505641 api_server.go:72] duration metric: took 614.960541ms to wait for apiserver process to appear ...
	I1009 20:21:56.195421  505641 api_server.go:88] waiting for apiserver healthz status ...
	I1009 20:21:56.195440  505641 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1009 20:21:56.220796  505641 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1009 20:21:56.220824  505641 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1009 20:21:56.317005  505641 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1009 20:21:56.317034  505641 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1009 20:21:56.422522  505641 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1009 20:21:56.422572  505641 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1009 20:21:56.531775  505641 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1009 20:21:56.531796  505641 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1009 20:21:56.547920  505641 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1009 20:21:56.547943  505641 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1009 20:21:56.564206  505641 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1009 20:21:56.564229  505641 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1009 20:21:56.579829  505641 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1009 20:21:56.579853  505641 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1009 20:21:56.595478  505641 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1009 20:21:56.595502  505641 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1009 20:21:56.610137  505641 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1009 20:21:56.610170  505641 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1009 20:21:56.625847  505641 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1009 20:22:00.620389  505641 api_server.go:279] https://192.168.76.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1009 20:22:00.620423  505641 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1009 20:22:00.620437  505641 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1009 20:22:00.703423  505641 api_server.go:279] https://192.168.76.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\": RBAC: clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found","reason":"Forbidden","details":{},"code":403}
	W1009 20:22:00.703455  505641 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\": RBAC: clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found","reason":"Forbidden","details":{},"code":403}
	I1009 20:22:00.703471  505641 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1009 20:22:00.748276  505641 api_server.go:279] https://192.168.76.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\": RBAC: clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found","reason":"Forbidden","details":{},"code":403}
	W1009 20:22:00.748306  505641 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\": RBAC: clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found","reason":"Forbidden","details":{},"code":403}
	I1009 20:22:00.909192  505641 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (4.816888973s)
	I1009 20:22:01.196456  505641 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1009 20:22:01.216282  505641 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1009 20:22:01.216316  505641 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
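	The 403 and 500 responses above are expected while the restarted apiserver finishes its post-start hooks: the check hits /healthz anonymously, so it is rejected until the rbac/bootstrap-roles hook recreates the system:public-info-viewer ClusterRole that allows anonymous access to /healthz, and it then returns 500 until the remaining hooks report ok. The same probe can be reproduced by hand, assuming the API server is reachable at the address shown in the log:
	
	  curl -sk https://192.168.76.2:8443/healthz
	  # 403 until bootstrap RBAC exists, 500 while post-start hooks finish, then 200 "ok"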
	I1009 20:22:01.696494  505641 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1009 20:22:01.739358  505641 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1009 20:22:01.739386  505641 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1009 20:22:02.194638  505641 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (5.568746238s)
	I1009 20:22:02.194890  505641 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (6.080695838s)
	I1009 20:22:02.195688  505641 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1009 20:22:02.197926  505641 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p newest-cni-160257 addons enable metrics-server
	
	I1009 20:22:02.200869  505641 out.go:179] * Enabled addons: default-storageclass, storage-provisioner, dashboard
	I1009 20:22:02.203855  505641 addons.go:514] duration metric: took 6.622867491s for enable addons: enabled=[default-storageclass storage-provisioner dashboard]
	I1009 20:22:02.205011  505641 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1009 20:22:02.205039  505641 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1009 20:22:02.695565  505641 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1009 20:22:02.707753  505641 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1009 20:22:02.710100  505641 api_server.go:141] control plane version: v1.34.1
	I1009 20:22:02.710127  505641 api_server.go:131] duration metric: took 6.514699514s to wait for apiserver health ...
	I1009 20:22:02.710137  505641 system_pods.go:43] waiting for kube-system pods to appear ...
	I1009 20:22:02.720979  505641 system_pods.go:59] 8 kube-system pods found
	I1009 20:22:02.721011  505641 system_pods.go:61] "coredns-66bc5c9577-h6jjt" [48d28596-1503-4675-b84d-a0770eea0d66] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1009 20:22:02.721020  505641 system_pods.go:61] "etcd-newest-cni-160257" [7c59b451-dfcc-492f-a84f-2b02319332fb] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1009 20:22:02.721029  505641 system_pods.go:61] "kindnet-bgspl" [d8f6a466-a843-4773-968c-86550cdbe807] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1009 20:22:02.721038  505641 system_pods.go:61] "kube-apiserver-newest-cni-160257" [12beea36-feb5-44e6-8093-e6627a7c0bc4] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1009 20:22:02.721046  505641 system_pods.go:61] "kube-controller-manager-newest-cni-160257" [d721fd3e-4510-4c9d-8156-1389f2c157e2] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1009 20:22:02.721053  505641 system_pods.go:61] "kube-proxy-q5mpb" [efd41b4d-05f4-4870-b04c-cca5ec803e68] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1009 20:22:02.721066  505641 system_pods.go:61] "kube-scheduler-newest-cni-160257" [80050cec-2104-4888-a8e1-611f33e21d87] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1009 20:22:02.721085  505641 system_pods.go:61] "storage-provisioner" [d17148c8-3517-4026-aa73-4a1705edbddf] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1009 20:22:02.721095  505641 system_pods.go:74] duration metric: took 10.948946ms to wait for pod list to return data ...
	I1009 20:22:02.721104  505641 default_sa.go:34] waiting for default service account to be created ...
	I1009 20:22:02.730183  505641 default_sa.go:45] found service account: "default"
	I1009 20:22:02.730206  505641 default_sa.go:55] duration metric: took 9.07643ms for default service account to be created ...
	I1009 20:22:02.730219  505641 kubeadm.go:586] duration metric: took 7.149792386s to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1009 20:22:02.730236  505641 node_conditions.go:102] verifying NodePressure condition ...
	I1009 20:22:02.742358  505641 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1009 20:22:02.742448  505641 node_conditions.go:123] node cpu capacity is 2
	I1009 20:22:02.742476  505641 node_conditions.go:105] duration metric: took 12.233114ms to run NodePressure ...
	I1009 20:22:02.742518  505641 start.go:242] waiting for startup goroutines ...
	I1009 20:22:02.742544  505641 start.go:247] waiting for cluster config update ...
	I1009 20:22:02.742587  505641 start.go:256] writing updated cluster config ...
	I1009 20:22:02.742967  505641 ssh_runner.go:195] Run: rm -f paused
	I1009 20:22:02.844541  505641 start.go:628] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1009 20:22:02.849754  505641 out.go:179] * Done! kubectl is now configured to use "newest-cni-160257" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Oct 09 20:22:02 newest-cni-160257 crio[611]: time="2025-10-09T20:22:02.538063206Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 09 20:22:02 newest-cni-160257 crio[611]: time="2025-10-09T20:22:02.541460336Z" level=info msg="Running pod sandbox: kube-system/kube-proxy-q5mpb/POD" id=a1401b41-f5ed-4c8e-8476-ddbfa8bc0814 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 09 20:22:02 newest-cni-160257 crio[611]: time="2025-10-09T20:22:02.541523779Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 09 20:22:02 newest-cni-160257 crio[611]: time="2025-10-09T20:22:02.545398011Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=a1401b41-f5ed-4c8e-8476-ddbfa8bc0814 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 09 20:22:02 newest-cni-160257 crio[611]: time="2025-10-09T20:22:02.546472461Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=cb15f1b7-1753-4f61-beff-1b42a5542cc4 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 09 20:22:02 newest-cni-160257 crio[611]: time="2025-10-09T20:22:02.549752937Z" level=info msg="Ran pod sandbox d037f519c51da9872e77f2e0881a7ccf6f5af218cfc8558010479b59dfb84f54 with infra container: kube-system/kube-proxy-q5mpb/POD" id=a1401b41-f5ed-4c8e-8476-ddbfa8bc0814 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 09 20:22:02 newest-cni-160257 crio[611]: time="2025-10-09T20:22:02.554135556Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=e81109eb-915b-4f7b-896b-e7d92b6bd7a8 name=/runtime.v1.ImageService/ImageStatus
	Oct 09 20:22:02 newest-cni-160257 crio[611]: time="2025-10-09T20:22:02.554571385Z" level=info msg="Ran pod sandbox 2a97c8ad885d0122325e63bae0911cbe824f9686ea71550a710b1c76e2d5ffd6 with infra container: kube-system/kindnet-bgspl/POD" id=cb15f1b7-1753-4f61-beff-1b42a5542cc4 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 09 20:22:02 newest-cni-160257 crio[611]: time="2025-10-09T20:22:02.559815472Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=c560ee79-1477-47dc-9f5e-f483b7260112 name=/runtime.v1.ImageService/ImageStatus
	Oct 09 20:22:02 newest-cni-160257 crio[611]: time="2025-10-09T20:22:02.560174213Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=3032bb72-2fa8-4480-a6d2-191f1b8c29f6 name=/runtime.v1.ImageService/ImageStatus
	Oct 09 20:22:02 newest-cni-160257 crio[611]: time="2025-10-09T20:22:02.563540819Z" level=info msg="Creating container: kube-system/kube-proxy-q5mpb/kube-proxy" id=2e8adef3-cd08-4160-b101-2c6e1070d9c1 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 20:22:02 newest-cni-160257 crio[611]: time="2025-10-09T20:22:02.564080773Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 09 20:22:02 newest-cni-160257 crio[611]: time="2025-10-09T20:22:02.565672702Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=254f716d-dbed-4d4b-be45-477347970620 name=/runtime.v1.ImageService/ImageStatus
	Oct 09 20:22:02 newest-cni-160257 crio[611]: time="2025-10-09T20:22:02.567135744Z" level=info msg="Creating container: kube-system/kindnet-bgspl/kindnet-cni" id=d62a8b5d-b27b-44d6-965a-5eec3e28fb02 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 20:22:02 newest-cni-160257 crio[611]: time="2025-10-09T20:22:02.567382705Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 09 20:22:02 newest-cni-160257 crio[611]: time="2025-10-09T20:22:02.58648313Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 09 20:22:02 newest-cni-160257 crio[611]: time="2025-10-09T20:22:02.587059318Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 09 20:22:02 newest-cni-160257 crio[611]: time="2025-10-09T20:22:02.590895009Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 09 20:22:02 newest-cni-160257 crio[611]: time="2025-10-09T20:22:02.59270318Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 09 20:22:02 newest-cni-160257 crio[611]: time="2025-10-09T20:22:02.631570106Z" level=info msg="Created container ce12541f225c961a2d52993ca9aeb9b77f4d1c3c1d6fd17ff705960d66604ae6: kube-system/kindnet-bgspl/kindnet-cni" id=d62a8b5d-b27b-44d6-965a-5eec3e28fb02 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 20:22:02 newest-cni-160257 crio[611]: time="2025-10-09T20:22:02.63248686Z" level=info msg="Starting container: ce12541f225c961a2d52993ca9aeb9b77f4d1c3c1d6fd17ff705960d66604ae6" id=05f7fc9b-b594-4af1-93b6-078306ce56d9 name=/runtime.v1.RuntimeService/StartContainer
	Oct 09 20:22:02 newest-cni-160257 crio[611]: time="2025-10-09T20:22:02.635134138Z" level=info msg="Created container e69f5d91a1b96da308ea7785822c1e647575c92e17d9a695bf51f239b3fc3ccd: kube-system/kube-proxy-q5mpb/kube-proxy" id=2e8adef3-cd08-4160-b101-2c6e1070d9c1 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 20:22:02 newest-cni-160257 crio[611]: time="2025-10-09T20:22:02.637864347Z" level=info msg="Starting container: e69f5d91a1b96da308ea7785822c1e647575c92e17d9a695bf51f239b3fc3ccd" id=2b61086a-949a-4a10-908e-a0ffa509b000 name=/runtime.v1.RuntimeService/StartContainer
	Oct 09 20:22:02 newest-cni-160257 crio[611]: time="2025-10-09T20:22:02.639135419Z" level=info msg="Started container" PID=1063 containerID=ce12541f225c961a2d52993ca9aeb9b77f4d1c3c1d6fd17ff705960d66604ae6 description=kube-system/kindnet-bgspl/kindnet-cni id=05f7fc9b-b594-4af1-93b6-078306ce56d9 name=/runtime.v1.RuntimeService/StartContainer sandboxID=2a97c8ad885d0122325e63bae0911cbe824f9686ea71550a710b1c76e2d5ffd6
	Oct 09 20:22:02 newest-cni-160257 crio[611]: time="2025-10-09T20:22:02.64268117Z" level=info msg="Started container" PID=1064 containerID=e69f5d91a1b96da308ea7785822c1e647575c92e17d9a695bf51f239b3fc3ccd description=kube-system/kube-proxy-q5mpb/kube-proxy id=2b61086a-949a-4a10-908e-a0ffa509b000 name=/runtime.v1.RuntimeService/StartContainer sandboxID=d037f519c51da9872e77f2e0881a7ccf6f5af218cfc8558010479b59dfb84f54
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	ce12541f225c9       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c   7 seconds ago       Running             kindnet-cni               1                   2a97c8ad885d0       kindnet-bgspl                               kube-system
	e69f5d91a1b96       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9   7 seconds ago       Running             kube-proxy                1                   d037f519c51da       kube-proxy-q5mpb                            kube-system
	c44d38264b4c8       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a   14 seconds ago      Running             kube-controller-manager   1                   43399bda551b8       kube-controller-manager-newest-cni-160257   kube-system
	abf4f184374a5       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196   14 seconds ago      Running             kube-apiserver            1                   0687f443300e9       kube-apiserver-newest-cni-160257            kube-system
	8e9d85a685b55       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e   14 seconds ago      Running             etcd                      1                   7f3a95c98e0e6       etcd-newest-cni-160257                      kube-system
	1bb5609884005       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0   14 seconds ago      Running             kube-scheduler            1                   c9a7126ff6c3a       kube-scheduler-newest-cni-160257            kube-system
	
	
	==> describe nodes <==
	Name:               newest-cni-160257
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=newest-cni-160257
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=67931d2cc0a2e29153be17ee4f2d502b8a45c9cb
	                    minikube.k8s.io/name=newest-cni-160257
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_09T20_21_36_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 09 Oct 2025 20:21:32 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  newest-cni-160257
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 09 Oct 2025 20:22:00 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 09 Oct 2025 20:22:00 +0000   Thu, 09 Oct 2025 20:21:27 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 09 Oct 2025 20:22:00 +0000   Thu, 09 Oct 2025 20:21:27 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 09 Oct 2025 20:22:00 +0000   Thu, 09 Oct 2025 20:21:27 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Thu, 09 Oct 2025 20:22:00 +0000   Thu, 09 Oct 2025 20:21:27 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/cni/net.d/. Has your network provider started?
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    newest-cni-160257
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022292Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022292Ki
	  pods:               110
	System Info:
	  Machine ID:                 02c511aadbd449108e4b8c7226050824
	  System UUID:                0382347f-ca4b-4cf8-b386-5e98e49e227d
	  Boot ID:                    7eb9c7d9-8be8-415a-b196-052b632584a4
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.42.0.0/24
	PodCIDRs:                     10.42.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-newest-cni-160257                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         34s
	  kube-system                 kindnet-bgspl                                100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      28s
	  kube-system                 kube-apiserver-newest-cni-160257             250m (12%)    0 (0%)      0 (0%)           0 (0%)         34s
	  kube-system                 kube-controller-manager-newest-cni-160257    200m (10%)    0 (0%)      0 (0%)           0 (0%)         36s
	  kube-system                 kube-proxy-q5mpb                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         28s
	  kube-system                 kube-scheduler-newest-cni-160257             100m (5%)     0 (0%)      0 (0%)           0 (0%)         34s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (1%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 27s                kube-proxy       
	  Normal   Starting                 6s                 kube-proxy       
	  Normal   NodeHasSufficientMemory  44s (x8 over 44s)  kubelet          Node newest-cni-160257 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    44s (x8 over 44s)  kubelet          Node newest-cni-160257 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     44s (x8 over 44s)  kubelet          Node newest-cni-160257 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    34s                kubelet          Node newest-cni-160257 status is now: NodeHasNoDiskPressure
	  Warning  CgroupV1                 34s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  34s                kubelet          Node newest-cni-160257 status is now: NodeHasSufficientMemory
	  Normal   NodeHasSufficientPID     34s                kubelet          Node newest-cni-160257 status is now: NodeHasSufficientPID
	  Normal   Starting                 34s                kubelet          Starting kubelet.
	  Normal   RegisteredNode           29s                node-controller  Node newest-cni-160257 event: Registered Node newest-cni-160257 in Controller
	  Normal   Starting                 15s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 15s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  15s (x8 over 15s)  kubelet          Node newest-cni-160257 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    15s (x8 over 15s)  kubelet          Node newest-cni-160257 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     15s (x8 over 15s)  kubelet          Node newest-cni-160257 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           5s                 node-controller  Node newest-cni-160257 event: Registered Node newest-cni-160257 in Controller
	
	
	==> dmesg <==
	[  +2.167003] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:51] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:52] overlayfs: idmapped layers are currently not supported
	[ +41.056229] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:53] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:54] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:55] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:57] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:59] overlayfs: idmapped layers are currently not supported
	[ +30.257956] overlayfs: idmapped layers are currently not supported
	[Oct 9 20:02] overlayfs: idmapped layers are currently not supported
	[Oct 9 20:04] overlayfs: idmapped layers are currently not supported
	[Oct 9 20:06] overlayfs: idmapped layers are currently not supported
	[Oct 9 20:12] overlayfs: idmapped layers are currently not supported
	[Oct 9 20:14] overlayfs: idmapped layers are currently not supported
	[Oct 9 20:15] overlayfs: idmapped layers are currently not supported
	[Oct 9 20:16] overlayfs: idmapped layers are currently not supported
	[ +23.810739] overlayfs: idmapped layers are currently not supported
	[Oct 9 20:18] overlayfs: idmapped layers are currently not supported
	[ +26.082927] overlayfs: idmapped layers are currently not supported
	[Oct 9 20:19] overlayfs: idmapped layers are currently not supported
	[ +21.956614] overlayfs: idmapped layers are currently not supported
	[Oct 9 20:21] overlayfs: idmapped layers are currently not supported
	[ +16.062221] overlayfs: idmapped layers are currently not supported
	[ +28.876478] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [8e9d85a685b554c78f90ad52ce9e2e08feb85c5a0c3c0cecaa44409529755644] <==
	{"level":"warn","ts":"2025-10-09T20:21:58.939460Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56786","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T20:21:58.985516Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56814","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T20:21:59.060525Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56830","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T20:21:59.099510Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56846","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T20:21:59.117387Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56850","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T20:21:59.161591Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56870","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T20:21:59.201275Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56890","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T20:21:59.222208Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56912","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T20:21:59.268209Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56920","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T20:21:59.307172Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56954","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T20:21:59.310041Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56934","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T20:21:59.321565Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56976","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T20:21:59.344181Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57000","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T20:21:59.375240Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57026","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T20:21:59.382142Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57042","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T20:21:59.398184Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57056","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T20:21:59.415492Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57072","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T20:21:59.454916Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57084","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T20:21:59.485281Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57096","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T20:21:59.501225Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57114","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T20:21:59.531892Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57126","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T20:21:59.553245Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57148","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T20:21:59.576732Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57172","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T20:21:59.601786Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57202","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T20:21:59.686722Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57216","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 20:22:09 up  3:04,  0 user,  load average: 3.96, 3.44, 2.39
	Linux newest-cni-160257 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [ce12541f225c961a2d52993ca9aeb9b77f4d1c3c1d6fd17ff705960d66604ae6] <==
	I1009 20:22:02.821472       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1009 20:22:02.821984       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1009 20:22:02.822136       1 main.go:148] setting mtu 1500 for CNI 
	I1009 20:22:02.822176       1 main.go:178] kindnetd IP family: "ipv4"
	I1009 20:22:02.822218       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-09T20:22:03Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1009 20:22:03.016908       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1009 20:22:03.017031       1 controller.go:381] "Waiting for informer caches to sync"
	I1009 20:22:03.017068       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1009 20:22:03.018236       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	
	
	==> kube-apiserver [abf4f184374a54e8b81747413f26453de47bd605a2aeb2a0889c7f019dc40141] <==
	I1009 20:22:00.740095       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1009 20:22:00.794417       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1009 20:22:00.809228       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1009 20:22:00.814626       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1009 20:22:00.814653       1 policy_source.go:240] refreshing policies
	I1009 20:22:00.814946       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1009 20:22:00.828921       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1009 20:22:00.836809       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1009 20:22:00.836840       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1009 20:22:00.836965       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1009 20:22:00.838665       1 cache.go:39] Caches are synced for RemoteAvailability controller
	E1009 20:22:00.871928       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1009 20:22:01.546741       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1009 20:22:01.778256       1 controller.go:667] quota admission added evaluator for: namespaces
	I1009 20:22:01.903445       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1009 20:22:01.948991       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1009 20:22:01.962636       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1009 20:22:01.988367       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1009 20:22:02.078177       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.108.99.148"}
	I1009 20:22:02.105346       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.109.104.146"}
	I1009 20:22:04.112382       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1009 20:22:04.387764       1 controller.go:667] quota admission added evaluator for: endpoints
	I1009 20:22:04.387786       1 controller.go:667] quota admission added evaluator for: endpoints
	I1009 20:22:04.543669       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1009 20:22:04.598317       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	
	
	==> kube-controller-manager [c44d38264b4c8f90676162945fe05a02624867f517df88a401a8ae08e56998fc] <==
	I1009 20:22:04.049599       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1009 20:22:04.049610       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1009 20:22:04.049616       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1009 20:22:04.057331       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1009 20:22:04.065555       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1009 20:22:04.065858       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1009 20:22:04.075929       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1009 20:22:04.078982       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1009 20:22:04.082502       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1009 20:22:04.082617       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1009 20:22:04.083594       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1009 20:22:04.083662       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1009 20:22:04.084835       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1009 20:22:04.085356       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1009 20:22:04.086595       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1009 20:22:04.088131       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1009 20:22:04.090969       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1009 20:22:04.091153       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1009 20:22:04.091282       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1009 20:22:04.093929       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1009 20:22:04.096629       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1009 20:22:04.097839       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1009 20:22:04.101218       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1009 20:22:04.102827       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1009 20:22:04.108936       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	
	
	==> kube-proxy [e69f5d91a1b96da308ea7785822c1e647575c92e17d9a695bf51f239b3fc3ccd] <==
	I1009 20:22:02.719471       1 server_linux.go:53] "Using iptables proxy"
	I1009 20:22:02.851320       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1009 20:22:02.967883       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1009 20:22:02.967927       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1009 20:22:02.968016       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1009 20:22:03.007689       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1009 20:22:03.007767       1 server_linux.go:132] "Using iptables Proxier"
	I1009 20:22:03.026906       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1009 20:22:03.027332       1 server.go:527] "Version info" version="v1.34.1"
	I1009 20:22:03.027546       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1009 20:22:03.029377       1 config.go:200] "Starting service config controller"
	I1009 20:22:03.029455       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1009 20:22:03.029509       1 config.go:106] "Starting endpoint slice config controller"
	I1009 20:22:03.029554       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1009 20:22:03.029592       1 config.go:403] "Starting serviceCIDR config controller"
	I1009 20:22:03.029631       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1009 20:22:03.030935       1 config.go:309] "Starting node config controller"
	I1009 20:22:03.031024       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1009 20:22:03.031080       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1009 20:22:03.129847       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1009 20:22:03.129963       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1009 20:22:03.129994       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [1bb5609884005775d5cb2c3c1d622130225e6c83a8497006aa8e75133f859524] <==
	I1009 20:21:58.771959       1 serving.go:386] Generated self-signed cert in-memory
	I1009 20:22:01.667606       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1009 20:22:01.670253       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1009 20:22:01.706298       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1009 20:22:01.706382       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1009 20:22:01.706404       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1009 20:22:01.706427       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1009 20:22:01.724039       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1009 20:22:01.724075       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1009 20:22:01.724102       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1009 20:22:01.724108       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1009 20:22:01.809459       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1009 20:22:01.825599       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1009 20:22:01.825742       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 09 20:22:00 newest-cni-160257 kubelet[727]: I1009 20:22:00.863944     727 kubelet_node_status.go:78] "Successfully registered node" node="newest-cni-160257"
	Oct 09 20:22:00 newest-cni-160257 kubelet[727]: I1009 20:22:00.863973     727 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.42.0.0/24"
	Oct 09 20:22:00 newest-cni-160257 kubelet[727]: I1009 20:22:00.864712     727 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.42.0.0/24"
	Oct 09 20:22:00 newest-cni-160257 kubelet[727]: E1009 20:22:00.934393     727 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-newest-cni-160257\" already exists" pod="kube-system/kube-scheduler-newest-cni-160257"
	Oct 09 20:22:00 newest-cni-160257 kubelet[727]: I1009 20:22:00.934444     727 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/etcd-newest-cni-160257"
	Oct 09 20:22:00 newest-cni-160257 kubelet[727]: E1009 20:22:00.950362     727 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"etcd-newest-cni-160257\" already exists" pod="kube-system/etcd-newest-cni-160257"
	Oct 09 20:22:00 newest-cni-160257 kubelet[727]: I1009 20:22:00.950413     727 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-newest-cni-160257"
	Oct 09 20:22:00 newest-cni-160257 kubelet[727]: E1009 20:22:00.988701     727 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-newest-cni-160257\" already exists" pod="kube-system/kube-apiserver-newest-cni-160257"
	Oct 09 20:22:00 newest-cni-160257 kubelet[727]: I1009 20:22:00.988844     727 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-newest-cni-160257"
	Oct 09 20:22:01 newest-cni-160257 kubelet[727]: E1009 20:22:01.019244     727 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-newest-cni-160257\" already exists" pod="kube-system/kube-controller-manager-newest-cni-160257"
	Oct 09 20:22:01 newest-cni-160257 kubelet[727]: E1009 20:22:01.793610     727 projected.go:291] Couldn't get configMap kube-system/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition
	Oct 09 20:22:01 newest-cni-160257 kubelet[727]: E1009 20:22:01.794185     727 projected.go:196] Error preparing data for projected volume kube-api-access-tm542 for pod kube-system/kube-proxy-q5mpb: [failed to fetch token: serviceaccounts "kube-proxy" is forbidden: User "system:node:newest-cni-160257" cannot create resource "serviceaccounts/token" in API group "" in the namespace "kube-system": no relationship found between node 'newest-cni-160257' and this object
	Oct 09 20:22:01 newest-cni-160257 kubelet[727]: RBAC: [clusterrole.rbac.authorization.k8s.io "system:basic-user" not found, clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found, clusterrole.rbac.authorization.k8s.io "system:certificates.k8s.io:certificatesigningrequests:selfnodeclient" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found], failed to sync configmap cache: timed out waiting for the condition]
	Oct 09 20:22:01 newest-cni-160257 kubelet[727]: E1009 20:22:01.794305     727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/efd41b4d-05f4-4870-b04c-cca5ec803e68-kube-api-access-tm542 podName:efd41b4d-05f4-4870-b04c-cca5ec803e68 nodeName:}" failed. No retries permitted until 2025-10-09 20:22:02.294276391 +0000 UTC m=+7.757576730 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-tm542" (UniqueName: "kubernetes.io/projected/efd41b4d-05f4-4870-b04c-cca5ec803e68-kube-api-access-tm542") pod "kube-proxy-q5mpb" (UID: "efd41b4d-05f4-4870-b04c-cca5ec803e68") : [failed to fetch token: serviceaccounts "kube-proxy" is forbidden: User "system:node:newest-cni-160257" cannot create resource "serviceaccounts/token" in API group "" in the namespace "kube-system": no relationship found between node 'newest-cni-160257' and this object
	Oct 09 20:22:01 newest-cni-160257 kubelet[727]: RBAC: [clusterrole.rbac.authorization.k8s.io "system:basic-user" not found, clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found, clusterrole.rbac.authorization.k8s.io "system:certificates.k8s.io:certificatesigningrequests:selfnodeclient" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found], failed to sync configmap cache: timed out waiting for the condition]
	Oct 09 20:22:01 newest-cni-160257 kubelet[727]: E1009 20:22:01.794128     727 projected.go:291] Couldn't get configMap kube-system/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition
	Oct 09 20:22:01 newest-cni-160257 kubelet[727]: E1009 20:22:01.794904     727 projected.go:196] Error preparing data for projected volume kube-api-access-v8lx2 for pod kube-system/kindnet-bgspl: [failed to fetch token: serviceaccounts "kindnet" is forbidden: User "system:node:newest-cni-160257" cannot create resource "serviceaccounts/token" in API group "" in the namespace "kube-system": no relationship found between node 'newest-cni-160257' and this object
	Oct 09 20:22:01 newest-cni-160257 kubelet[727]: RBAC: [clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found, clusterrole.rbac.authorization.k8s.io "system:certificates.k8s.io:certificatesigningrequests:selfnodeclient" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:basic-user" not found], failed to sync configmap cache: timed out waiting for the condition]
	Oct 09 20:22:01 newest-cni-160257 kubelet[727]: E1009 20:22:01.794987     727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/d8f6a466-a843-4773-968c-86550cdbe807-kube-api-access-v8lx2 podName:d8f6a466-a843-4773-968c-86550cdbe807 nodeName:}" failed. No retries permitted until 2025-10-09 20:22:02.294969544 +0000 UTC m=+7.758269883 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-v8lx2" (UniqueName: "kubernetes.io/projected/d8f6a466-a843-4773-968c-86550cdbe807-kube-api-access-v8lx2") pod "kindnet-bgspl" (UID: "d8f6a466-a843-4773-968c-86550cdbe807") : [failed to fetch token: serviceaccounts "kindnet" is forbidden: User "system:node:newest-cni-160257" cannot create resource "serviceaccounts/token" in API group "" in the namespace "kube-system": no relationship found between node 'newest-cni-160257' and this object
	Oct 09 20:22:01 newest-cni-160257 kubelet[727]: RBAC: [clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found, clusterrole.rbac.authorization.k8s.io "system:certificates.k8s.io:certificatesigningrequests:selfnodeclient" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:basic-user" not found], failed to sync configmap cache: timed out waiting for the condition]
	Oct 09 20:22:02 newest-cni-160257 kubelet[727]: I1009 20:22:02.315292     727 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	Oct 09 20:22:02 newest-cni-160257 kubelet[727]: W1009 20:22:02.557080     727 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/b09c68fc79eaa0587a228b9cc096b0eae173cd347de717a0ae93a73ef6ea01b7/crio-2a97c8ad885d0122325e63bae0911cbe824f9686ea71550a710b1c76e2d5ffd6 WatchSource:0}: Error finding container 2a97c8ad885d0122325e63bae0911cbe824f9686ea71550a710b1c76e2d5ffd6: Status 404 returned error can't find the container with id 2a97c8ad885d0122325e63bae0911cbe824f9686ea71550a710b1c76e2d5ffd6
	Oct 09 20:22:04 newest-cni-160257 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 09 20:22:04 newest-cni-160257 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 09 20:22:04 newest-cni-160257 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-160257 -n newest-cni-160257
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-160257 -n newest-cni-160257: exit status 2 (490.501288ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context newest-cni-160257 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: coredns-66bc5c9577-h6jjt storage-provisioner dashboard-metrics-scraper-6ffb444bf9-rqbq8 kubernetes-dashboard-855c9754f9-dbw27
helpers_test.go:282: ======> post-mortem[TestStartStop/group/newest-cni/serial/Pause]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context newest-cni-160257 describe pod coredns-66bc5c9577-h6jjt storage-provisioner dashboard-metrics-scraper-6ffb444bf9-rqbq8 kubernetes-dashboard-855c9754f9-dbw27
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context newest-cni-160257 describe pod coredns-66bc5c9577-h6jjt storage-provisioner dashboard-metrics-scraper-6ffb444bf9-rqbq8 kubernetes-dashboard-855c9754f9-dbw27: exit status 1 (147.644189ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "coredns-66bc5c9577-h6jjt" not found
	Error from server (NotFound): pods "storage-provisioner" not found
	Error from server (NotFound): pods "dashboard-metrics-scraper-6ffb444bf9-rqbq8" not found
	Error from server (NotFound): pods "kubernetes-dashboard-855c9754f9-dbw27" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context newest-cni-160257 describe pod coredns-66bc5c9577-h6jjt storage-provisioner dashboard-metrics-scraper-6ffb444bf9-rqbq8 kubernetes-dashboard-855c9754f9-dbw27: exit status 1
--- FAIL: TestStartStop/group/newest-cni/serial/Pause (7.51s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/Pause (7.99s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p default-k8s-diff-port-417984 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 pause -p default-k8s-diff-port-417984 --alsologtostderr -v=1: exit status 80 (2.130276643s)

                                                
                                                
-- stdout --
	* Pausing node default-k8s-diff-port-417984 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1009 20:22:08.084711  508168 out.go:360] Setting OutFile to fd 1 ...
	I1009 20:22:08.084934  508168 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1009 20:22:08.084981  508168 out.go:374] Setting ErrFile to fd 2...
	I1009 20:22:08.085002  508168 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1009 20:22:08.085339  508168 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21683-294150/.minikube/bin
	I1009 20:22:08.085705  508168 out.go:368] Setting JSON to false
	I1009 20:22:08.085901  508168 mustload.go:65] Loading cluster: default-k8s-diff-port-417984
	I1009 20:22:08.086393  508168 config.go:182] Loaded profile config "default-k8s-diff-port-417984": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1009 20:22:08.087240  508168 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-417984 --format={{.State.Status}}
	I1009 20:22:08.117537  508168 host.go:66] Checking if "default-k8s-diff-port-417984" exists ...
	I1009 20:22:08.118014  508168 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1009 20:22:08.238631  508168 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:49 OomKillDisable:true NGoroutines:62 SystemTime:2025-10-09 20:22:08.228029825 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214827008 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1009 20:22:08.239273  508168 pause.go:58] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-arm64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1758198818-20370/minikube-v1.37.0-1758198818-20370-arm64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1758198818-20370-arm64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:default-k8s-diff-port-417984 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1009 20:22:08.242564  508168 out.go:179] * Pausing node default-k8s-diff-port-417984 ... 
	I1009 20:22:08.246216  508168 host.go:66] Checking if "default-k8s-diff-port-417984" exists ...
	I1009 20:22:08.246591  508168 ssh_runner.go:195] Run: systemctl --version
	I1009 20:22:08.246644  508168 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-417984
	I1009 20:22:08.289221  508168 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33451 SSHKeyPath:/home/jenkins/minikube-integration/21683-294150/.minikube/machines/default-k8s-diff-port-417984/id_rsa Username:docker}
	I1009 20:22:08.396420  508168 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1009 20:22:08.415108  508168 pause.go:52] kubelet running: true
	I1009 20:22:08.415183  508168 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1009 20:22:08.781782  508168 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1009 20:22:08.781888  508168 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1009 20:22:08.882938  508168 cri.go:89] found id: "39930a90aa1c8620eb52278dd704fbf540a2a3e4cf6848c5f5ce913ae24f805f"
	I1009 20:22:08.882963  508168 cri.go:89] found id: "8b461d773e987839de7e20c9cd9bb4948a0996cfcf809a7c1ad3d90725546a55"
	I1009 20:22:08.882968  508168 cri.go:89] found id: "73c61df879c3dc8d5b3227ca55aa7859b8d2457ba7fbefd75e8a149cbe297d0c"
	I1009 20:22:08.882972  508168 cri.go:89] found id: "54c509996be49424eefe920fa96a4572f3da6bccf14cfae32e894928a28527d1"
	I1009 20:22:08.882976  508168 cri.go:89] found id: "c88c6763c4c37baf69c511ec04150bf21aef0cc6fc5e8c7d6be66a050b424afd"
	I1009 20:22:08.882979  508168 cri.go:89] found id: "4eeb90a44de65c7aa6b10b300aa161b1c37aa94a4e93eadfd6975cbb0428c677"
	I1009 20:22:08.882982  508168 cri.go:89] found id: "bef0f8b493af26a97c449506b2fb953144bf49745a3a417030e064059e7b187a"
	I1009 20:22:08.882985  508168 cri.go:89] found id: "c867b182d54580a31fb8f6e96300d3d3a7d7beacfb0c84d96100f68f251ea0f6"
	I1009 20:22:08.882988  508168 cri.go:89] found id: "a5832f172fdf43a40fddfb19a9cd192309bb7216cfb2d490b21e4a51b24a923e"
	I1009 20:22:08.882994  508168 cri.go:89] found id: "b722b93e81fef15b5065babebb7c70b66d2f38666e650edc71def81153950789"
	I1009 20:22:08.882998  508168 cri.go:89] found id: "909b93cd668d2f501c2849a4db47d77b8135382f8833dc953ca0f46547198534"
	I1009 20:22:08.883001  508168 cri.go:89] found id: ""
	I1009 20:22:08.883049  508168 ssh_runner.go:195] Run: sudo runc list -f json
	I1009 20:22:08.902230  508168 retry.go:31] will retry after 251.28737ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-09T20:22:08Z" level=error msg="open /run/runc: no such file or directory"
	I1009 20:22:09.154703  508168 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1009 20:22:09.171427  508168 pause.go:52] kubelet running: false
	I1009 20:22:09.171499  508168 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1009 20:22:09.395836  508168 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1009 20:22:09.395928  508168 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1009 20:22:09.494531  508168 cri.go:89] found id: "39930a90aa1c8620eb52278dd704fbf540a2a3e4cf6848c5f5ce913ae24f805f"
	I1009 20:22:09.494551  508168 cri.go:89] found id: "8b461d773e987839de7e20c9cd9bb4948a0996cfcf809a7c1ad3d90725546a55"
	I1009 20:22:09.494556  508168 cri.go:89] found id: "73c61df879c3dc8d5b3227ca55aa7859b8d2457ba7fbefd75e8a149cbe297d0c"
	I1009 20:22:09.494560  508168 cri.go:89] found id: "54c509996be49424eefe920fa96a4572f3da6bccf14cfae32e894928a28527d1"
	I1009 20:22:09.494563  508168 cri.go:89] found id: "c88c6763c4c37baf69c511ec04150bf21aef0cc6fc5e8c7d6be66a050b424afd"
	I1009 20:22:09.494567  508168 cri.go:89] found id: "4eeb90a44de65c7aa6b10b300aa161b1c37aa94a4e93eadfd6975cbb0428c677"
	I1009 20:22:09.494571  508168 cri.go:89] found id: "bef0f8b493af26a97c449506b2fb953144bf49745a3a417030e064059e7b187a"
	I1009 20:22:09.494574  508168 cri.go:89] found id: "c867b182d54580a31fb8f6e96300d3d3a7d7beacfb0c84d96100f68f251ea0f6"
	I1009 20:22:09.494576  508168 cri.go:89] found id: "a5832f172fdf43a40fddfb19a9cd192309bb7216cfb2d490b21e4a51b24a923e"
	I1009 20:22:09.494582  508168 cri.go:89] found id: "b722b93e81fef15b5065babebb7c70b66d2f38666e650edc71def81153950789"
	I1009 20:22:09.494586  508168 cri.go:89] found id: "909b93cd668d2f501c2849a4db47d77b8135382f8833dc953ca0f46547198534"
	I1009 20:22:09.494589  508168 cri.go:89] found id: ""
	I1009 20:22:09.494639  508168 ssh_runner.go:195] Run: sudo runc list -f json
	I1009 20:22:09.509281  508168 retry.go:31] will retry after 207.303556ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-09T20:22:09Z" level=error msg="open /run/runc: no such file or directory"
	I1009 20:22:09.717774  508168 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1009 20:22:09.736517  508168 pause.go:52] kubelet running: false
	I1009 20:22:09.736586  508168 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1009 20:22:09.982176  508168 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1009 20:22:09.982261  508168 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1009 20:22:10.110425  508168 cri.go:89] found id: "39930a90aa1c8620eb52278dd704fbf540a2a3e4cf6848c5f5ce913ae24f805f"
	I1009 20:22:10.110447  508168 cri.go:89] found id: "8b461d773e987839de7e20c9cd9bb4948a0996cfcf809a7c1ad3d90725546a55"
	I1009 20:22:10.110453  508168 cri.go:89] found id: "73c61df879c3dc8d5b3227ca55aa7859b8d2457ba7fbefd75e8a149cbe297d0c"
	I1009 20:22:10.110457  508168 cri.go:89] found id: "54c509996be49424eefe920fa96a4572f3da6bccf14cfae32e894928a28527d1"
	I1009 20:22:10.110461  508168 cri.go:89] found id: "c88c6763c4c37baf69c511ec04150bf21aef0cc6fc5e8c7d6be66a050b424afd"
	I1009 20:22:10.110465  508168 cri.go:89] found id: "4eeb90a44de65c7aa6b10b300aa161b1c37aa94a4e93eadfd6975cbb0428c677"
	I1009 20:22:10.110468  508168 cri.go:89] found id: "bef0f8b493af26a97c449506b2fb953144bf49745a3a417030e064059e7b187a"
	I1009 20:22:10.110472  508168 cri.go:89] found id: "c867b182d54580a31fb8f6e96300d3d3a7d7beacfb0c84d96100f68f251ea0f6"
	I1009 20:22:10.110475  508168 cri.go:89] found id: "a5832f172fdf43a40fddfb19a9cd192309bb7216cfb2d490b21e4a51b24a923e"
	I1009 20:22:10.110482  508168 cri.go:89] found id: "b722b93e81fef15b5065babebb7c70b66d2f38666e650edc71def81153950789"
	I1009 20:22:10.110485  508168 cri.go:89] found id: "909b93cd668d2f501c2849a4db47d77b8135382f8833dc953ca0f46547198534"
	I1009 20:22:10.110494  508168 cri.go:89] found id: ""
	I1009 20:22:10.110558  508168 ssh_runner.go:195] Run: sudo runc list -f json
	I1009 20:22:10.127918  508168 out.go:203] 
	W1009 20:22:10.130725  508168 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-09T20:22:10Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-09T20:22:10Z" level=error msg="open /run/runc: no such file or directory"
	
	W1009 20:22:10.130761  508168 out.go:285] * 
	* 
	W1009 20:22:10.136538  508168 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_logs_00302df19cf26dc43b03ea32978d5cabc189a5ea_6.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_logs_00302df19cf26dc43b03ea32978d5cabc189a5ea_6.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1009 20:22:10.139374  508168 out.go:203] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-arm64 pause -p default-k8s-diff-port-417984 --alsologtostderr -v=1 failed: exit status 80
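The pause failure above reduces to `sudo runc list -f json` exiting non-zero because /run/runc does not exist on the node. Below is a minimal diagnostic sketch for reproducing the check by hand; it assumes you have shelled into the affected node (for example with `minikube ssh -p default-k8s-diff-port-417984`), that crictl and runc are on the PATH, and that cri-o keeps its config under the usual /etc/crio/ paths (an assumption, not something this report confirms).

	# Assumption: run inside the minikube node (e.g. `minikube ssh -p default-k8s-diff-port-417984`).
	# 1. Confirm the state directory `runc list` expects is really absent:
	sudo ls -ld /run/runc || echo "/run/runc is missing"
	# 2. Check whether cri-o configures a different runtime root (paths below are
	#    the usual cri-o defaults; treat them as assumptions):
	sudo grep -rE 'runtime_root|runtime_path' /etc/crio/ 2>/dev/null
	# 3. List containers through the CRI instead, mirroring the crictl call the
	#    pause path already issues in the stderr log above:
	sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system
	# 4. If step 2 turns up a non-default root, point runc at it explicitly:
	sudo runc --root /run/runc list -f json   # substitute the root found above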
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect default-k8s-diff-port-417984
helpers_test.go:243: (dbg) docker inspect default-k8s-diff-port-417984:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "1f0d0c8a230b788cd206633a19ec2c3f4c5347ad7d829fb182e003f40efd7670",
	        "Created": "2025-10-09T20:19:12.869398438Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 500475,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-09T20:20:59.998622598Z",
	            "FinishedAt": "2025-10-09T20:20:58.890170224Z"
	        },
	        "Image": "sha256:0e619a02aa1f399c748a05785ca381533d7a01df724b62f3a9ddd9e288db8f6f",
	        "ResolvConfPath": "/var/lib/docker/containers/1f0d0c8a230b788cd206633a19ec2c3f4c5347ad7d829fb182e003f40efd7670/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/1f0d0c8a230b788cd206633a19ec2c3f4c5347ad7d829fb182e003f40efd7670/hostname",
	        "HostsPath": "/var/lib/docker/containers/1f0d0c8a230b788cd206633a19ec2c3f4c5347ad7d829fb182e003f40efd7670/hosts",
	        "LogPath": "/var/lib/docker/containers/1f0d0c8a230b788cd206633a19ec2c3f4c5347ad7d829fb182e003f40efd7670/1f0d0c8a230b788cd206633a19ec2c3f4c5347ad7d829fb182e003f40efd7670-json.log",
	        "Name": "/default-k8s-diff-port-417984",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "default-k8s-diff-port-417984:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "default-k8s-diff-port-417984",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "1f0d0c8a230b788cd206633a19ec2c3f4c5347ad7d829fb182e003f40efd7670",
	                "LowerDir": "/var/lib/docker/overlay2/69199908f673e21207f723026f89b47767e510c7bef43a60d014ab9a5dff4f7d-init/diff:/var/lib/docker/overlay2/810a91395ed9b7ed2c0bbbdee8600efcf64f88722cbabc47d471235a9f901ed9/diff",
	                "MergedDir": "/var/lib/docker/overlay2/69199908f673e21207f723026f89b47767e510c7bef43a60d014ab9a5dff4f7d/merged",
	                "UpperDir": "/var/lib/docker/overlay2/69199908f673e21207f723026f89b47767e510c7bef43a60d014ab9a5dff4f7d/diff",
	                "WorkDir": "/var/lib/docker/overlay2/69199908f673e21207f723026f89b47767e510c7bef43a60d014ab9a5dff4f7d/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "default-k8s-diff-port-417984",
	                "Source": "/var/lib/docker/volumes/default-k8s-diff-port-417984/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-diff-port-417984",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-diff-port-417984",
	                "name.minikube.sigs.k8s.io": "default-k8s-diff-port-417984",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "7f49815c2c562d899a022955dbba7724c161cb955a470a67390835f36f303efc",
	            "SandboxKey": "/var/run/docker/netns/7f49815c2c56",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33451"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33452"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33455"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33453"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33454"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "default-k8s-diff-port-417984": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "f6:62:fb:00:89:d7",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "08acd2192c7aac80b9d6df51ab71eaa1736eaa95c3d16e0c4f8feb8f8a4a1db2",
	                    "EndpointID": "1f7d1bfb7257d6b58e33e4ae7ce309d90aad5e6896a7788d1f4ef7a021629616",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "default-k8s-diff-port-417984",
	                        "1f0d0c8a230b"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
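For reference, the 127.0.0.1:33451 SSH endpoint that sshutil.go reported in the stderr log comes straight from the NetworkSettings.Ports block in the inspect output above. A minimal sketch of the same lookup, using the Go template shown earlier in the log (the profile name is taken from this report):

	# Resolve the host port that Docker published for the node's SSH port (22/tcp):
	docker container inspect \
	  -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' \
	  default-k8s-diff-port-417984
	# For the state captured above this prints 33451, so the node is reachable at
	# 127.0.0.1:33451 using the profile's id_rsa key.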
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-417984 -n default-k8s-diff-port-417984
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-417984 -n default-k8s-diff-port-417984: exit status 2 (541.194371ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
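The exit status 2 here is expected to be non-fatal: the container host is still Running (see the stdout above), but the failed pause attempt had already run `systemctl disable --now kubelet`, so a fuller status query would likely show a degraded component. A small sketch querying more than the single {{.Host}} field; the {{.Kubelet}} and {{.APIServer}} template fields are the standard minikube status fields and are an assumption here, not taken from this report:

	# Broader status check than the {{.Host}}-only form used by the test helper:
	out/minikube-linux-arm64 status -p default-k8s-diff-port-417984 \
	  --format='host:{{.Host}} kubelet:{{.Kubelet}} apiserver:{{.APIServer}}'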
helpers_test.go:252: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p default-k8s-diff-port-417984 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p default-k8s-diff-port-417984 logs -n 25: (1.822566701s)
helpers_test.go:260: TestStartStop/group/default-k8s-diff-port/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬──────────────
───────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼──────────────
───────┤
	│ delete  │ -p no-preload-020313                                                                                                                                                                                                                          │ no-preload-020313            │ jenkins │ v1.37.0 │ 09 Oct 25 20:19 UTC │ 09 Oct 25 20:19 UTC │
	│ delete  │ -p no-preload-020313                                                                                                                                                                                                                          │ no-preload-020313            │ jenkins │ v1.37.0 │ 09 Oct 25 20:19 UTC │ 09 Oct 25 20:19 UTC │
	│ delete  │ -p disable-driver-mounts-613966                                                                                                                                                                                                               │ disable-driver-mounts-613966 │ jenkins │ v1.37.0 │ 09 Oct 25 20:19 UTC │ 09 Oct 25 20:19 UTC │
	│ start   │ -p default-k8s-diff-port-417984 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-417984 │ jenkins │ v1.37.0 │ 09 Oct 25 20:19 UTC │ 09 Oct 25 20:20 UTC │
	│ addons  │ enable metrics-server -p embed-certs-565110 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-565110           │ jenkins │ v1.37.0 │ 09 Oct 25 20:19 UTC │                     │
	│ stop    │ -p embed-certs-565110 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-565110           │ jenkins │ v1.37.0 │ 09 Oct 25 20:19 UTC │ 09 Oct 25 20:19 UTC │
	│ addons  │ enable dashboard -p embed-certs-565110 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-565110           │ jenkins │ v1.37.0 │ 09 Oct 25 20:19 UTC │ 09 Oct 25 20:19 UTC │
	│ start   │ -p embed-certs-565110 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-565110           │ jenkins │ v1.37.0 │ 09 Oct 25 20:19 UTC │ 09 Oct 25 20:20 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-417984 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-417984 │ jenkins │ v1.37.0 │ 09 Oct 25 20:20 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-417984 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-417984 │ jenkins │ v1.37.0 │ 09 Oct 25 20:20 UTC │ 09 Oct 25 20:20 UTC │
	│ image   │ embed-certs-565110 image list --format=json                                                                                                                                                                                                   │ embed-certs-565110           │ jenkins │ v1.37.0 │ 09 Oct 25 20:20 UTC │ 09 Oct 25 20:20 UTC │
	│ pause   │ -p embed-certs-565110 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-565110           │ jenkins │ v1.37.0 │ 09 Oct 25 20:20 UTC │                     │
	│ addons  │ enable dashboard -p default-k8s-diff-port-417984 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-417984 │ jenkins │ v1.37.0 │ 09 Oct 25 20:20 UTC │ 09 Oct 25 20:20 UTC │
	│ delete  │ -p embed-certs-565110                                                                                                                                                                                                                         │ embed-certs-565110           │ jenkins │ v1.37.0 │ 09 Oct 25 20:20 UTC │ 09 Oct 25 20:21 UTC │
	│ start   │ -p default-k8s-diff-port-417984 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-417984 │ jenkins │ v1.37.0 │ 09 Oct 25 20:20 UTC │ 09 Oct 25 20:21 UTC │
	│ delete  │ -p embed-certs-565110                                                                                                                                                                                                                         │ embed-certs-565110           │ jenkins │ v1.37.0 │ 09 Oct 25 20:21 UTC │ 09 Oct 25 20:21 UTC │
	│ start   │ -p newest-cni-160257 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-160257            │ jenkins │ v1.37.0 │ 09 Oct 25 20:21 UTC │ 09 Oct 25 20:21 UTC │
	│ addons  │ enable metrics-server -p newest-cni-160257 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-160257            │ jenkins │ v1.37.0 │ 09 Oct 25 20:21 UTC │                     │
	│ stop    │ -p newest-cni-160257 --alsologtostderr -v=3                                                                                                                                                                                                   │ newest-cni-160257            │ jenkins │ v1.37.0 │ 09 Oct 25 20:21 UTC │ 09 Oct 25 20:21 UTC │
	│ addons  │ enable dashboard -p newest-cni-160257 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ newest-cni-160257            │ jenkins │ v1.37.0 │ 09 Oct 25 20:21 UTC │ 09 Oct 25 20:21 UTC │
	│ start   │ -p newest-cni-160257 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-160257            │ jenkins │ v1.37.0 │ 09 Oct 25 20:21 UTC │ 09 Oct 25 20:22 UTC │
	│ image   │ newest-cni-160257 image list --format=json                                                                                                                                                                                                    │ newest-cni-160257            │ jenkins │ v1.37.0 │ 09 Oct 25 20:22 UTC │ 09 Oct 25 20:22 UTC │
	│ pause   │ -p newest-cni-160257 --alsologtostderr -v=1                                                                                                                                                                                                   │ newest-cni-160257            │ jenkins │ v1.37.0 │ 09 Oct 25 20:22 UTC │                     │
	│ image   │ default-k8s-diff-port-417984 image list --format=json                                                                                                                                                                                         │ default-k8s-diff-port-417984 │ jenkins │ v1.37.0 │ 09 Oct 25 20:22 UTC │ 09 Oct 25 20:22 UTC │
	│ pause   │ -p default-k8s-diff-port-417984 --alsologtostderr -v=1                                                                                                                                                                                        │ default-k8s-diff-port-417984 │ jenkins │ v1.37.0 │ 09 Oct 25 20:22 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴──────────────
───────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/09 20:21:47
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1009 20:21:47.225777  505641 out.go:360] Setting OutFile to fd 1 ...
	I1009 20:21:47.225898  505641 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1009 20:21:47.225907  505641 out.go:374] Setting ErrFile to fd 2...
	I1009 20:21:47.225913  505641 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1009 20:21:47.226184  505641 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21683-294150/.minikube/bin
	I1009 20:21:47.226608  505641 out.go:368] Setting JSON to false
	I1009 20:21:47.227622  505641 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":11047,"bootTime":1760030261,"procs":204,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1009 20:21:47.227724  505641 start.go:143] virtualization:  
	I1009 20:21:47.232792  505641 out.go:179] * [newest-cni-160257] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1009 20:21:47.236046  505641 out.go:179]   - MINIKUBE_LOCATION=21683
	I1009 20:21:47.236091  505641 notify.go:221] Checking for updates...
	I1009 20:21:47.244521  505641 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1009 20:21:47.247836  505641 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21683-294150/kubeconfig
	I1009 20:21:47.252638  505641 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21683-294150/.minikube
	I1009 20:21:47.255969  505641 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1009 20:21:47.259775  505641 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1009 20:21:47.263692  505641 config.go:182] Loaded profile config "newest-cni-160257": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1009 20:21:47.264262  505641 driver.go:422] Setting default libvirt URI to qemu:///system
	I1009 20:21:47.300892  505641 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1009 20:21:47.301004  505641 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1009 20:21:47.375444  505641 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-09 20:21:47.364461238 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214827008 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1009 20:21:47.375566  505641 docker.go:319] overlay module found
	I1009 20:21:47.378702  505641 out.go:179] * Using the docker driver based on existing profile
	I1009 20:21:47.381764  505641 start.go:309] selected driver: docker
	I1009 20:21:47.381788  505641 start.go:930] validating driver "docker" against &{Name:newest-cni-160257 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-160257 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker Mou
ntIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1009 20:21:47.381896  505641 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1009 20:21:47.382683  505641 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1009 20:21:47.444269  505641 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-09 20:21:47.43523216 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aa
rch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214827008 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path
:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1009 20:21:47.444618  505641 start_flags.go:1012] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1009 20:21:47.444655  505641 cni.go:84] Creating CNI manager for ""
	I1009 20:21:47.444713  505641 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1009 20:21:47.444753  505641 start.go:353] cluster config:
	{Name:newest-cni-160257 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-160257 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Containe
rRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker
BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1009 20:21:47.449762  505641 out.go:179] * Starting "newest-cni-160257" primary control-plane node in "newest-cni-160257" cluster
	I1009 20:21:47.452638  505641 cache.go:123] Beginning downloading kic base image for docker with crio
	I1009 20:21:47.455508  505641 out.go:179] * Pulling base image v0.0.48-1759745255-21703 ...
	I1009 20:21:47.458251  505641 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1009 20:21:47.458312  505641 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21683-294150/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1009 20:21:47.458325  505641 cache.go:58] Caching tarball of preloaded images
	I1009 20:21:47.458336  505641 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon
	I1009 20:21:47.458406  505641 preload.go:233] Found /home/jenkins/minikube-integration/21683-294150/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1009 20:21:47.458416  505641 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1009 20:21:47.458536  505641 profile.go:143] Saving config to /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/newest-cni-160257/config.json ...
	I1009 20:21:47.478231  505641 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon, skipping pull
	I1009 20:21:47.478255  505641 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 exists in daemon, skipping load
	I1009 20:21:47.478273  505641 cache.go:232] Successfully downloaded all kic artifacts
	I1009 20:21:47.478297  505641 start.go:361] acquireMachinesLock for newest-cni-160257: {Name:mkab4aa92a505aec53d4bce517e62dd4f38ff19e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1009 20:21:47.478362  505641 start.go:365] duration metric: took 36.932µs to acquireMachinesLock for "newest-cni-160257"
	I1009 20:21:47.478381  505641 start.go:97] Skipping create...Using existing machine configuration
	I1009 20:21:47.478392  505641 fix.go:55] fixHost starting: 
	I1009 20:21:47.478676  505641 cli_runner.go:164] Run: docker container inspect newest-cni-160257 --format={{.State.Status}}
	I1009 20:21:47.498453  505641 fix.go:113] recreateIfNeeded on newest-cni-160257: state=Stopped err=<nil>
	W1009 20:21:47.498498  505641 fix.go:139] unexpected machine state, will restart: <nil>
	W1009 20:21:46.391695  500265 pod_ready.go:104] pod "coredns-66bc5c9577-4c2vb" is not "Ready", error: <nil>
	W1009 20:21:48.892695  500265 pod_ready.go:104] pod "coredns-66bc5c9577-4c2vb" is not "Ready", error: <nil>
	I1009 20:21:47.501793  505641 out.go:252] * Restarting existing docker container for "newest-cni-160257" ...
	I1009 20:21:47.501900  505641 cli_runner.go:164] Run: docker start newest-cni-160257
	I1009 20:21:47.774834  505641 cli_runner.go:164] Run: docker container inspect newest-cni-160257 --format={{.State.Status}}
	I1009 20:21:47.798720  505641 kic.go:430] container "newest-cni-160257" state is running.
	I1009 20:21:47.799854  505641 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-160257
	I1009 20:21:47.826771  505641 profile.go:143] Saving config to /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/newest-cni-160257/config.json ...
	I1009 20:21:47.827119  505641 machine.go:93] provisionDockerMachine start ...
	I1009 20:21:47.827296  505641 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-160257
	I1009 20:21:47.853865  505641 main.go:141] libmachine: Using SSH client type: native
	I1009 20:21:47.854202  505641 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33461 <nil> <nil>}
	I1009 20:21:47.854217  505641 main.go:141] libmachine: About to run SSH command:
	hostname
	I1009 20:21:47.854805  505641 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:37994->127.0.0.1:33461: read: connection reset by peer
	I1009 20:21:51.009336  505641 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-160257
	
	I1009 20:21:51.009371  505641 ubuntu.go:182] provisioning hostname "newest-cni-160257"
	I1009 20:21:51.009454  505641 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-160257
	I1009 20:21:51.028512  505641 main.go:141] libmachine: Using SSH client type: native
	I1009 20:21:51.028834  505641 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33461 <nil> <nil>}
	I1009 20:21:51.028846  505641 main.go:141] libmachine: About to run SSH command:
	sudo hostname newest-cni-160257 && echo "newest-cni-160257" | sudo tee /etc/hostname
	I1009 20:21:51.195197  505641 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-160257
	
	I1009 20:21:51.195290  505641 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-160257
	I1009 20:21:51.214100  505641 main.go:141] libmachine: Using SSH client type: native
	I1009 20:21:51.214407  505641 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33461 <nil> <nil>}
	I1009 20:21:51.214425  505641 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-160257' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-160257/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-160257' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1009 20:21:51.385834  505641 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1009 20:21:51.385861  505641 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21683-294150/.minikube CaCertPath:/home/jenkins/minikube-integration/21683-294150/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21683-294150/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21683-294150/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21683-294150/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21683-294150/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21683-294150/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21683-294150/.minikube}
	I1009 20:21:51.385883  505641 ubuntu.go:190] setting up certificates
	I1009 20:21:51.385893  505641 provision.go:84] configureAuth start
	I1009 20:21:51.385966  505641 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-160257
	I1009 20:21:51.410176  505641 provision.go:143] copyHostCerts
	I1009 20:21:51.410244  505641 exec_runner.go:144] found /home/jenkins/minikube-integration/21683-294150/.minikube/ca.pem, removing ...
	I1009 20:21:51.410265  505641 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21683-294150/.minikube/ca.pem
	I1009 20:21:51.410352  505641 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-294150/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21683-294150/.minikube/ca.pem (1078 bytes)
	I1009 20:21:51.410465  505641 exec_runner.go:144] found /home/jenkins/minikube-integration/21683-294150/.minikube/cert.pem, removing ...
	I1009 20:21:51.410477  505641 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21683-294150/.minikube/cert.pem
	I1009 20:21:51.410505  505641 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-294150/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21683-294150/.minikube/cert.pem (1123 bytes)
	I1009 20:21:51.410574  505641 exec_runner.go:144] found /home/jenkins/minikube-integration/21683-294150/.minikube/key.pem, removing ...
	I1009 20:21:51.410588  505641 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21683-294150/.minikube/key.pem
	I1009 20:21:51.410615  505641 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-294150/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21683-294150/.minikube/key.pem (1679 bytes)
	I1009 20:21:51.410679  505641 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21683-294150/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21683-294150/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21683-294150/.minikube/certs/ca-key.pem org=jenkins.newest-cni-160257 san=[127.0.0.1 192.168.76.2 localhost minikube newest-cni-160257]
	I1009 20:21:51.863331  505641 provision.go:177] copyRemoteCerts
	I1009 20:21:51.863402  505641 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1009 20:21:51.863461  505641 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-160257
	I1009 20:21:51.880865  505641 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33461 SSHKeyPath:/home/jenkins/minikube-integration/21683-294150/.minikube/machines/newest-cni-160257/id_rsa Username:docker}
	I1009 20:21:51.993391  505641 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-294150/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1009 20:21:52.015149  505641 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-294150/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1009 20:21:52.036074  505641 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-294150/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1009 20:21:52.055481  505641 provision.go:87] duration metric: took 669.560055ms to configureAuth
	I1009 20:21:52.055507  505641 ubuntu.go:206] setting minikube options for container-runtime
	I1009 20:21:52.055721  505641 config.go:182] Loaded profile config "newest-cni-160257": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1009 20:21:52.055831  505641 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-160257
	I1009 20:21:52.074474  505641 main.go:141] libmachine: Using SSH client type: native
	I1009 20:21:52.074899  505641 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33461 <nil> <nil>}
	I1009 20:21:52.074924  505641 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1009 20:21:52.387858  505641 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1009 20:21:52.387888  505641 machine.go:96] duration metric: took 4.560754136s to provisionDockerMachine
	I1009 20:21:52.387899  505641 start.go:294] postStartSetup for "newest-cni-160257" (driver="docker")
	I1009 20:21:52.387910  505641 start.go:323] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1009 20:21:52.387969  505641 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1009 20:21:52.388017  505641 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-160257
	I1009 20:21:52.412217  505641 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33461 SSHKeyPath:/home/jenkins/minikube-integration/21683-294150/.minikube/machines/newest-cni-160257/id_rsa Username:docker}
	I1009 20:21:52.526292  505641 ssh_runner.go:195] Run: cat /etc/os-release
	I1009 20:21:52.530482  505641 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1009 20:21:52.530553  505641 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1009 20:21:52.530579  505641 filesync.go:126] Scanning /home/jenkins/minikube-integration/21683-294150/.minikube/addons for local assets ...
	I1009 20:21:52.530667  505641 filesync.go:126] Scanning /home/jenkins/minikube-integration/21683-294150/.minikube/files for local assets ...
	I1009 20:21:52.530782  505641 filesync.go:149] local asset: /home/jenkins/minikube-integration/21683-294150/.minikube/files/etc/ssl/certs/2960022.pem -> 2960022.pem in /etc/ssl/certs
	I1009 20:21:52.530942  505641 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1009 20:21:52.540673  505641 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-294150/.minikube/files/etc/ssl/certs/2960022.pem --> /etc/ssl/certs/2960022.pem (1708 bytes)
	I1009 20:21:52.563308  505641 start.go:297] duration metric: took 175.393484ms for postStartSetup
	I1009 20:21:52.563461  505641 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1009 20:21:52.563528  505641 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-160257
	I1009 20:21:52.580881  505641 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33461 SSHKeyPath:/home/jenkins/minikube-integration/21683-294150/.minikube/machines/newest-cni-160257/id_rsa Username:docker}
	I1009 20:21:52.682184  505641 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1009 20:21:52.686923  505641 fix.go:57] duration metric: took 5.208523021s for fixHost
	I1009 20:21:52.686956  505641 start.go:84] releasing machines lock for "newest-cni-160257", held for 5.208584913s
	I1009 20:21:52.687039  505641 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-160257
	I1009 20:21:52.704600  505641 ssh_runner.go:195] Run: cat /version.json
	I1009 20:21:52.704636  505641 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1009 20:21:52.704650  505641 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-160257
	I1009 20:21:52.704689  505641 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-160257
	I1009 20:21:52.732613  505641 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33461 SSHKeyPath:/home/jenkins/minikube-integration/21683-294150/.minikube/machines/newest-cni-160257/id_rsa Username:docker}
	I1009 20:21:52.746816  505641 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33461 SSHKeyPath:/home/jenkins/minikube-integration/21683-294150/.minikube/machines/newest-cni-160257/id_rsa Username:docker}
	I1009 20:21:52.956865  505641 ssh_runner.go:195] Run: systemctl --version
	I1009 20:21:52.963451  505641 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1009 20:21:53.007462  505641 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1009 20:21:53.013409  505641 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1009 20:21:53.013495  505641 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1009 20:21:53.022944  505641 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1009 20:21:53.022975  505641 start.go:496] detecting cgroup driver to use...
	I1009 20:21:53.023044  505641 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1009 20:21:53.023144  505641 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1009 20:21:53.038534  505641 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1009 20:21:53.052168  505641 docker.go:218] disabling cri-docker service (if available) ...
	I1009 20:21:53.052276  505641 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1009 20:21:53.068820  505641 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1009 20:21:53.083319  505641 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1009 20:21:53.207043  505641 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1009 20:21:53.336124  505641 docker.go:234] disabling docker service ...
	I1009 20:21:53.336193  505641 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1009 20:21:53.352807  505641 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1009 20:21:53.366353  505641 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1009 20:21:53.489904  505641 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1009 20:21:53.611315  505641 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1009 20:21:53.625318  505641 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1009 20:21:53.641664  505641 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1009 20:21:53.641780  505641 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 20:21:53.651576  505641 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1009 20:21:53.651646  505641 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 20:21:53.662547  505641 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 20:21:53.671928  505641 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 20:21:53.681650  505641 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1009 20:21:53.690299  505641 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 20:21:53.699940  505641 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 20:21:53.709708  505641 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 20:21:53.719273  505641 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1009 20:21:53.728115  505641 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1009 20:21:53.736223  505641 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 20:21:53.859445  505641 ssh_runner.go:195] Run: sudo systemctl restart crio
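Note: the block above rewrites the CRI-O configuration in place — crictl is pointed at the CRI-O socket, the pause image and cgroup driver are pinned, and unprivileged low ports are allowed — before the runtime is restarted. A minimal way to confirm the result by hand (a sketch; the drop-in file contains more keys than the ones touched here):

    # check the crictl endpoint and the keys the sed edits above set
    cat /etc/crictl.yaml
    grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
        /etc/crio/crio.conf.d/02-crio.conf
    # expected values, per the commands above:
    #   pause_image = "registry.k8s.io/pause:3.10.1"
    #   cgroup_manager = "cgroupfs"
    #   conmon_cgroup = "pod"
    #   "net.ipv4.ip_unprivileged_port_start=0",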
	I1009 20:21:54.009289  505641 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1009 20:21:54.009385  505641 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1009 20:21:54.014669  505641 start.go:564] Will wait 60s for crictl version
	I1009 20:21:54.014762  505641 ssh_runner.go:195] Run: which crictl
	I1009 20:21:54.018778  505641 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1009 20:21:54.044923  505641 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1009 20:21:54.045032  505641 ssh_runner.go:195] Run: crio --version
	I1009 20:21:54.077028  505641 ssh_runner.go:195] Run: crio --version
	I1009 20:21:54.111747  505641 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1009 20:21:54.114671  505641 cli_runner.go:164] Run: docker network inspect newest-cni-160257 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1009 20:21:54.130922  505641 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1009 20:21:54.134873  505641 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
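Note: the /etc/hosts rewrite above pins host.minikube.internal to the Docker network gateway so workloads on the node can reach the host. A quick hand check (sketch):

    getent hosts host.minikube.internal   # should print 192.168.76.1 per the entry written above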
	I1009 20:21:54.150070  505641 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	W1009 20:21:50.893834  500265 pod_ready.go:104] pod "coredns-66bc5c9577-4c2vb" is not "Ready", error: <nil>
	W1009 20:21:53.392803  500265 pod_ready.go:104] pod "coredns-66bc5c9577-4c2vb" is not "Ready", error: <nil>
	I1009 20:21:54.393013  500265 pod_ready.go:94] pod "coredns-66bc5c9577-4c2vb" is "Ready"
	I1009 20:21:54.393038  500265 pod_ready.go:86] duration metric: took 32.006387261s for pod "coredns-66bc5c9577-4c2vb" in "kube-system" namespace to be "Ready" or be gone ...
	I1009 20:21:54.396978  500265 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-417984" in "kube-system" namespace to be "Ready" or be gone ...
	I1009 20:21:54.403052  500265 pod_ready.go:94] pod "etcd-default-k8s-diff-port-417984" is "Ready"
	I1009 20:21:54.403075  500265 pod_ready.go:86] duration metric: took 6.075564ms for pod "etcd-default-k8s-diff-port-417984" in "kube-system" namespace to be "Ready" or be gone ...
	I1009 20:21:54.406444  500265 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-417984" in "kube-system" namespace to be "Ready" or be gone ...
	I1009 20:21:54.412180  500265 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-417984" is "Ready"
	I1009 20:21:54.412203  500265 pod_ready.go:86] duration metric: took 5.733758ms for pod "kube-apiserver-default-k8s-diff-port-417984" in "kube-system" namespace to be "Ready" or be gone ...
	I1009 20:21:54.415033  500265 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-417984" in "kube-system" namespace to be "Ready" or be gone ...
	I1009 20:21:54.152910  505641 kubeadm.go:883] updating cluster {Name:newest-cni-160257 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-160257 Namespace:default APIServerHAVIP: APIServerName:minikubeCA API
ServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:
262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1009 20:21:54.153074  505641 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1009 20:21:54.153263  505641 ssh_runner.go:195] Run: sudo crictl images --output json
	I1009 20:21:54.201986  505641 crio.go:514] all images are preloaded for cri-o runtime.
	I1009 20:21:54.202008  505641 crio.go:433] Images already preloaded, skipping extraction
	I1009 20:21:54.202092  505641 ssh_runner.go:195] Run: sudo crictl images --output json
	I1009 20:21:54.230381  505641 crio.go:514] all images are preloaded for cri-o runtime.
	I1009 20:21:54.230446  505641 cache_images.go:85] Images are preloaded, skipping loading
	I1009 20:21:54.230479  505641 kubeadm.go:934] updating node { 192.168.76.2 8443 v1.34.1 crio true true} ...
	I1009 20:21:54.230592  505641 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=newest-cni-160257 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:newest-cni-160257 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1009 20:21:54.230706  505641 ssh_runner.go:195] Run: crio config
	I1009 20:21:54.302972  505641 cni.go:84] Creating CNI manager for ""
	I1009 20:21:54.303001  505641 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1009 20:21:54.303044  505641 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1009 20:21:54.303084  505641 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-160257 NodeName:newest-cni-160257 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/
kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1009 20:21:54.303315  505641 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-160257"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1009 20:21:54.303416  505641 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1009 20:21:54.315581  505641 binaries.go:44] Found k8s binaries, skipping transfer
	I1009 20:21:54.315723  505641 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1009 20:21:54.323840  505641 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1009 20:21:54.344587  505641 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1009 20:21:54.359645  505641 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2212 bytes)
	I1009 20:21:54.373013  505641 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1009 20:21:54.376761  505641 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1009 20:21:54.386749  505641 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 20:21:54.517399  505641 ssh_runner.go:195] Run: sudo systemctl start kubelet
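Note: at this point the kubelet drop-in, the unit file, and the regenerated kubeadm config have been copied onto the node and the kubelet started. Two hand checks that line up with those writes (a sketch; it assumes kubeadm sits alongside kubelet/kubectl in the same binaries directory and is new enough to have the "config validate" subcommand):

    systemctl cat kubelet        # should show the ExecStart override listed above
    sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config validate \
        --config /var/tmp/minikube/kubeadm.yaml.new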
	I1009 20:21:54.534719  505641 certs.go:69] Setting up /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/newest-cni-160257 for IP: 192.168.76.2
	I1009 20:21:54.534737  505641 certs.go:195] generating shared ca certs ...
	I1009 20:21:54.534761  505641 certs.go:227] acquiring lock for ca certs: {Name:mk21fccf7de12baa51e226eec44546b42b579a70 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 20:21:54.534896  505641 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21683-294150/.minikube/ca.key
	I1009 20:21:54.534936  505641 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21683-294150/.minikube/proxy-client-ca.key
	I1009 20:21:54.534943  505641 certs.go:257] generating profile certs ...
	I1009 20:21:54.535020  505641 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/newest-cni-160257/client.key
	I1009 20:21:54.535080  505641 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/newest-cni-160257/apiserver.key.f76169c2
	I1009 20:21:54.535117  505641 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/newest-cni-160257/proxy-client.key
	I1009 20:21:54.535227  505641 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-294150/.minikube/certs/296002.pem (1338 bytes)
	W1009 20:21:54.535254  505641 certs.go:480] ignoring /home/jenkins/minikube-integration/21683-294150/.minikube/certs/296002_empty.pem, impossibly tiny 0 bytes
	I1009 20:21:54.535262  505641 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-294150/.minikube/certs/ca-key.pem (1679 bytes)
	I1009 20:21:54.535293  505641 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-294150/.minikube/certs/ca.pem (1078 bytes)
	I1009 20:21:54.535320  505641 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-294150/.minikube/certs/cert.pem (1123 bytes)
	I1009 20:21:54.535341  505641 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-294150/.minikube/certs/key.pem (1679 bytes)
	I1009 20:21:54.535381  505641 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-294150/.minikube/files/etc/ssl/certs/2960022.pem (1708 bytes)
	I1009 20:21:54.535945  505641 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-294150/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1009 20:21:54.556736  505641 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-294150/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1009 20:21:54.577440  505641 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-294150/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1009 20:21:54.598476  505641 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-294150/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1009 20:21:54.618936  505641 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/newest-cni-160257/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1009 20:21:54.641991  505641 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/newest-cni-160257/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1009 20:21:54.675026  505641 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/newest-cni-160257/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1009 20:21:54.699287  505641 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/newest-cni-160257/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1009 20:21:54.720084  505641 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-294150/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1009 20:21:54.749667  505641 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-294150/.minikube/certs/296002.pem --> /usr/share/ca-certificates/296002.pem (1338 bytes)
	I1009 20:21:54.779863  505641 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-294150/.minikube/files/etc/ssl/certs/2960022.pem --> /usr/share/ca-certificates/2960022.pem (1708 bytes)
	I1009 20:21:54.802813  505641 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1009 20:21:54.820188  505641 ssh_runner.go:195] Run: openssl version
	I1009 20:21:54.828192  505641 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1009 20:21:54.838549  505641 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1009 20:21:54.844886  505641 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  9 19:01 /usr/share/ca-certificates/minikubeCA.pem
	I1009 20:21:54.845006  505641 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1009 20:21:54.901921  505641 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1009 20:21:54.911134  505641 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/296002.pem && ln -fs /usr/share/ca-certificates/296002.pem /etc/ssl/certs/296002.pem"
	I1009 20:21:54.919928  505641 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/296002.pem
	I1009 20:21:54.924078  505641 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  9 19:08 /usr/share/ca-certificates/296002.pem
	I1009 20:21:54.924152  505641 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/296002.pem
	I1009 20:21:54.965535  505641 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/296002.pem /etc/ssl/certs/51391683.0"
	I1009 20:21:54.974008  505641 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2960022.pem && ln -fs /usr/share/ca-certificates/2960022.pem /etc/ssl/certs/2960022.pem"
	I1009 20:21:54.983672  505641 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2960022.pem
	I1009 20:21:54.987949  505641 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  9 19:08 /usr/share/ca-certificates/2960022.pem
	I1009 20:21:54.988063  505641 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2960022.pem
	I1009 20:21:55.032343  505641 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2960022.pem /etc/ssl/certs/3ec20f2e.0"
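Note: the openssl/ln pairs above implement the standard c_rehash layout — each CA copied into /usr/share/ca-certificates gets a symlink in /etc/ssl/certs named after its subject hash, which is where names like b5213941.0 and 3ec20f2e.0 come from. The same step done by hand (sketch):

    h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${h}.0"
    # optional sanity check - assumes the profile's apiserver cert is signed by minikubeCA
    sudo openssl verify -CApath /etc/ssl/certs /var/lib/minikube/certs/apiserver.crt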
	I1009 20:21:55.042132  505641 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1009 20:21:55.047524  505641 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1009 20:21:55.091675  505641 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1009 20:21:55.150741  505641 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1009 20:21:55.227909  505641 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1009 20:21:55.337070  505641 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1009 20:21:55.407552  505641 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
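Note: the -checkend 86400 probes above ask whether each control-plane certificate will still be valid 24 hours from now; a non-zero exit flags the cert for regeneration. Worked example (sketch):

    sudo openssl x509 -noout -checkend 86400 \
        -in /var/lib/minikube/certs/apiserver-kubelet-client.crt \
        && echo "valid for at least another 24h" \
        || echo "expires within 24h - would need regeneration"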
	I1009 20:21:55.474195  505641 kubeadm.go:400] StartCluster: {Name:newest-cni-160257 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-160257 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APISer
verNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262
144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1009 20:21:55.474304  505641 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1009 20:21:55.474430  505641 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1009 20:21:55.531354  505641 cri.go:89] found id: "c44d38264b4c8f90676162945fe05a02624867f517df88a401a8ae08e56998fc"
	I1009 20:21:55.531377  505641 cri.go:89] found id: "abf4f184374a54e8b81747413f26453de47bd605a2aeb2a0889c7f019dc40141"
	I1009 20:21:55.531383  505641 cri.go:89] found id: "8e9d85a685b554c78f90ad52ce9e2e08feb85c5a0c3c0cecaa44409529755644"
	I1009 20:21:55.531387  505641 cri.go:89] found id: "1bb5609884005775d5cb2c3c1d622130225e6c83a8497006aa8e75133f859524"
	I1009 20:21:55.531391  505641 cri.go:89] found id: ""
	I1009 20:21:55.531471  505641 ssh_runner.go:195] Run: sudo runc list -f json
	W1009 20:21:55.544796  505641 kubeadm.go:407] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-09T20:21:55Z" level=error msg="open /run/runc: no such file or directory"
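Note: the runc listing fails here only because runc's state root (/run/runc by default when running as root) does not exist yet on the freshly restarted node, so there is nothing to report as paused. The same check with the root made explicit (sketch):

    sudo runc --root /run/runc list -f json || echo "no runc state yet - nothing is paused"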
	I1009 20:21:55.544919  505641 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1009 20:21:55.554602  505641 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1009 20:21:55.554623  505641 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1009 20:21:55.554701  505641 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1009 20:21:55.564787  505641 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1009 20:21:55.565500  505641 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-160257" does not appear in /home/jenkins/minikube-integration/21683-294150/kubeconfig
	I1009 20:21:55.565859  505641 kubeconfig.go:62] /home/jenkins/minikube-integration/21683-294150/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-160257" cluster setting kubeconfig missing "newest-cni-160257" context setting]
	I1009 20:21:55.566392  505641 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-294150/kubeconfig: {Name:mke4661344aa77e10bf6690825e8d3a29b29b411 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 20:21:55.568289  505641 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1009 20:21:55.578851  505641 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.76.2
	I1009 20:21:55.578897  505641 kubeadm.go:601] duration metric: took 24.267047ms to restartPrimaryControlPlane
	I1009 20:21:55.578907  505641 kubeadm.go:402] duration metric: took 104.740677ms to StartCluster
	I1009 20:21:55.578944  505641 settings.go:142] acquiring lock: {Name:mk20228ebaa2294ae35726600a0d8058088b24a4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 20:21:55.579059  505641 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21683-294150/kubeconfig
	I1009 20:21:55.580100  505641 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-294150/kubeconfig: {Name:mke4661344aa77e10bf6690825e8d3a29b29b411 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 20:21:55.580385  505641 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1009 20:21:55.580788  505641 config.go:182] Loaded profile config "newest-cni-160257": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1009 20:21:55.581007  505641 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1009 20:21:55.581158  505641 addons.go:69] Setting storage-provisioner=true in profile "newest-cni-160257"
	I1009 20:21:55.581195  505641 addons.go:238] Setting addon storage-provisioner=true in "newest-cni-160257"
	W1009 20:21:55.581214  505641 addons.go:247] addon storage-provisioner should already be in state true
	I1009 20:21:55.581264  505641 host.go:66] Checking if "newest-cni-160257" exists ...
	I1009 20:21:55.581895  505641 cli_runner.go:164] Run: docker container inspect newest-cni-160257 --format={{.State.Status}}
	I1009 20:21:55.582119  505641 addons.go:69] Setting dashboard=true in profile "newest-cni-160257"
	I1009 20:21:55.582152  505641 addons.go:238] Setting addon dashboard=true in "newest-cni-160257"
	W1009 20:21:55.582166  505641 addons.go:247] addon dashboard should already be in state true
	I1009 20:21:55.582202  505641 host.go:66] Checking if "newest-cni-160257" exists ...
	I1009 20:21:55.582545  505641 addons.go:69] Setting default-storageclass=true in profile "newest-cni-160257"
	I1009 20:21:55.582570  505641 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-160257"
	I1009 20:21:55.582699  505641 cli_runner.go:164] Run: docker container inspect newest-cni-160257 --format={{.State.Status}}
	I1009 20:21:55.582865  505641 cli_runner.go:164] Run: docker container inspect newest-cni-160257 --format={{.State.Status}}
	I1009 20:21:55.587168  505641 out.go:179] * Verifying Kubernetes components...
	I1009 20:21:55.590497  505641 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 20:21:55.647593  505641 addons.go:238] Setting addon default-storageclass=true in "newest-cni-160257"
	W1009 20:21:55.647617  505641 addons.go:247] addon default-storageclass should already be in state true
	I1009 20:21:55.647642  505641 host.go:66] Checking if "newest-cni-160257" exists ...
	I1009 20:21:55.648040  505641 cli_runner.go:164] Run: docker container inspect newest-cni-160257 --format={{.State.Status}}
	I1009 20:21:55.651603  505641 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1009 20:21:55.654619  505641 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1009 20:21:55.657431  505641 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1009 20:21:54.590365  500265 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-417984" is "Ready"
	I1009 20:21:54.590394  500265 pod_ready.go:86] duration metric: took 175.292139ms for pod "kube-controller-manager-default-k8s-diff-port-417984" in "kube-system" namespace to be "Ready" or be gone ...
	I1009 20:21:54.791524  500265 pod_ready.go:83] waiting for pod "kube-proxy-jnlzf" in "kube-system" namespace to be "Ready" or be gone ...
	I1009 20:21:55.191059  500265 pod_ready.go:94] pod "kube-proxy-jnlzf" is "Ready"
	I1009 20:21:55.191086  500265 pod_ready.go:86] duration metric: took 399.520534ms for pod "kube-proxy-jnlzf" in "kube-system" namespace to be "Ready" or be gone ...
	I1009 20:21:55.401649  500265 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-417984" in "kube-system" namespace to be "Ready" or be gone ...
	I1009 20:21:55.790702  500265 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-417984" is "Ready"
	I1009 20:21:55.790734  500265 pod_ready.go:86] duration metric: took 389.05888ms for pod "kube-scheduler-default-k8s-diff-port-417984" in "kube-system" namespace to be "Ready" or be gone ...
	I1009 20:21:55.790747  500265 pod_ready.go:40] duration metric: took 33.412767938s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1009 20:21:55.908403  500265 start.go:628] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1009 20:21:55.912550  500265 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-417984" cluster and "default" namespace by default
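Note: the pod_ready loop above is effectively a readiness wait over the labelled control-plane pods. Roughly the same check done by hand with kubectl (a sketch; the selectors mirror the labels listed in the log):

    kubectl -n kube-system wait --for=condition=Ready pod -l k8s-app=kube-dns --timeout=120s
    kubectl -n kube-system wait --for=condition=Ready pod -l component=kube-apiserver --timeout=120s
    kubectl -n kube-system wait --for=condition=Ready pod -l component=kube-scheduler --timeout=120s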
	I1009 20:21:55.657478  505641 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1009 20:21:55.657494  505641 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1009 20:21:55.657567  505641 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-160257
	I1009 20:21:55.660364  505641 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1009 20:21:55.660398  505641 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1009 20:21:55.660470  505641 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-160257
	I1009 20:21:55.697377  505641 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33461 SSHKeyPath:/home/jenkins/minikube-integration/21683-294150/.minikube/machines/newest-cni-160257/id_rsa Username:docker}
	I1009 20:21:55.699465  505641 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1009 20:21:55.699490  505641 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1009 20:21:55.699555  505641 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-160257
	I1009 20:21:55.731795  505641 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33461 SSHKeyPath:/home/jenkins/minikube-integration/21683-294150/.minikube/machines/newest-cni-160257/id_rsa Username:docker}
	I1009 20:21:55.737875  505641 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33461 SSHKeyPath:/home/jenkins/minikube-integration/21683-294150/.minikube/machines/newest-cni-160257/id_rsa Username:docker}
	I1009 20:21:56.004907  505641 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1009 20:21:56.076607  505641 api_server.go:52] waiting for apiserver process to appear ...
	I1009 20:21:56.076689  505641 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:21:56.092264  505641 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1009 20:21:56.114169  505641 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1009 20:21:56.195392  505641 api_server.go:72] duration metric: took 614.960541ms to wait for apiserver process to appear ...
	I1009 20:21:56.195421  505641 api_server.go:88] waiting for apiserver healthz status ...
	I1009 20:21:56.195440  505641 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1009 20:21:56.220796  505641 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1009 20:21:56.220824  505641 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1009 20:21:56.317005  505641 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1009 20:21:56.317034  505641 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1009 20:21:56.422522  505641 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1009 20:21:56.422572  505641 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1009 20:21:56.531775  505641 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1009 20:21:56.531796  505641 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1009 20:21:56.547920  505641 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1009 20:21:56.547943  505641 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1009 20:21:56.564206  505641 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1009 20:21:56.564229  505641 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1009 20:21:56.579829  505641 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1009 20:21:56.579853  505641 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1009 20:21:56.595478  505641 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1009 20:21:56.595502  505641 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1009 20:21:56.610137  505641 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1009 20:21:56.610170  505641 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1009 20:21:56.625847  505641 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1009 20:22:00.620389  505641 api_server.go:279] https://192.168.76.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1009 20:22:00.620423  505641 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1009 20:22:00.620437  505641 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1009 20:22:00.703423  505641 api_server.go:279] https://192.168.76.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\": RBAC: clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found","reason":"Forbidden","details":{},"code":403}
	W1009 20:22:00.703455  505641 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\": RBAC: clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found","reason":"Forbidden","details":{},"code":403}
	I1009 20:22:00.703471  505641 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1009 20:22:00.748276  505641 api_server.go:279] https://192.168.76.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\": RBAC: clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found","reason":"Forbidden","details":{},"code":403}
	W1009 20:22:00.748306  505641 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\": RBAC: clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found","reason":"Forbidden","details":{},"code":403}
	I1009 20:22:00.909192  505641 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (4.816888973s)
	I1009 20:22:01.196456  505641 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1009 20:22:01.216282  505641 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1009 20:22:01.216316  505641 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1009 20:22:01.696494  505641 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1009 20:22:01.739358  505641 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1009 20:22:01.739386  505641 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1009 20:22:02.194638  505641 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (5.568746238s)
	I1009 20:22:02.194890  505641 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (6.080695838s)
	I1009 20:22:02.195688  505641 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1009 20:22:02.197926  505641 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p newest-cni-160257 addons enable metrics-server
	
	I1009 20:22:02.200869  505641 out.go:179] * Enabled addons: default-storageclass, storage-provisioner, dashboard
	I1009 20:22:02.203855  505641 addons.go:514] duration metric: took 6.622867491s for enable addons: enabled=[default-storageclass storage-provisioner dashboard]
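Note: with the three addons reported enabled, the dashboard manifests applied above should have created objects in the kubernetes-dashboard namespace. A quick hand check, reusing the same kubeconfig and kubectl the log uses (sketch):

    sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
        /var/lib/minikube/binaries/v1.34.1/kubectl -n kubernetes-dashboard get deploy,svc,pods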
	I1009 20:22:02.205011  505641 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1009 20:22:02.205039  505641 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1009 20:22:02.695565  505641 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1009 20:22:02.707753  505641 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1009 20:22:02.710100  505641 api_server.go:141] control plane version: v1.34.1
	I1009 20:22:02.710127  505641 api_server.go:131] duration metric: took 6.514699514s to wait for apiserver health ...
	I1009 20:22:02.710137  505641 system_pods.go:43] waiting for kube-system pods to appear ...
	I1009 20:22:02.720979  505641 system_pods.go:59] 8 kube-system pods found
	I1009 20:22:02.721011  505641 system_pods.go:61] "coredns-66bc5c9577-h6jjt" [48d28596-1503-4675-b84d-a0770eea0d66] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1009 20:22:02.721020  505641 system_pods.go:61] "etcd-newest-cni-160257" [7c59b451-dfcc-492f-a84f-2b02319332fb] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1009 20:22:02.721029  505641 system_pods.go:61] "kindnet-bgspl" [d8f6a466-a843-4773-968c-86550cdbe807] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1009 20:22:02.721038  505641 system_pods.go:61] "kube-apiserver-newest-cni-160257" [12beea36-feb5-44e6-8093-e6627a7c0bc4] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1009 20:22:02.721046  505641 system_pods.go:61] "kube-controller-manager-newest-cni-160257" [d721fd3e-4510-4c9d-8156-1389f2c157e2] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1009 20:22:02.721053  505641 system_pods.go:61] "kube-proxy-q5mpb" [efd41b4d-05f4-4870-b04c-cca5ec803e68] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1009 20:22:02.721066  505641 system_pods.go:61] "kube-scheduler-newest-cni-160257" [80050cec-2104-4888-a8e1-611f33e21d87] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1009 20:22:02.721085  505641 system_pods.go:61] "storage-provisioner" [d17148c8-3517-4026-aa73-4a1705edbddf] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1009 20:22:02.721095  505641 system_pods.go:74] duration metric: took 10.948946ms to wait for pod list to return data ...
	I1009 20:22:02.721104  505641 default_sa.go:34] waiting for default service account to be created ...
	I1009 20:22:02.730183  505641 default_sa.go:45] found service account: "default"
	I1009 20:22:02.730206  505641 default_sa.go:55] duration metric: took 9.07643ms for default service account to be created ...
	I1009 20:22:02.730219  505641 kubeadm.go:586] duration metric: took 7.149792386s to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1009 20:22:02.730236  505641 node_conditions.go:102] verifying NodePressure condition ...
	I1009 20:22:02.742358  505641 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1009 20:22:02.742448  505641 node_conditions.go:123] node cpu capacity is 2
	I1009 20:22:02.742476  505641 node_conditions.go:105] duration metric: took 12.233114ms to run NodePressure ...
	I1009 20:22:02.742518  505641 start.go:242] waiting for startup goroutines ...
	I1009 20:22:02.742544  505641 start.go:247] waiting for cluster config update ...
	I1009 20:22:02.742587  505641 start.go:256] writing updated cluster config ...
	I1009 20:22:02.742967  505641 ssh_runner.go:195] Run: rm -f paused
	I1009 20:22:02.844541  505641 start.go:628] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1009 20:22:02.849754  505641 out.go:179] * Done! kubectl is now configured to use "newest-cni-160257" cluster and "default" namespace by default
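The wait loop above polls the apiserver's /healthz endpoint until the failing rbac/bootstrap-roles post-start hook clears and the 500 turns into a 200. The same per-check breakdown can be requested on demand; a minimal sketch, assuming kubectl is still pointed at the newest-cni-160257 context created here:

    kubectl --context newest-cni-160257 get --raw='/healthz?verbose'

Each [+]/[-] line in the 500 response above corresponds to one of the named checks returned by this endpoint.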
	
	
	==> CRI-O <==
	Oct 09 20:21:51 default-k8s-diff-port-417984 crio[654]: time="2025-10-09T20:21:51.270477016Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=17fdd05e-b724-447a-95c7-628014c429d9 name=/runtime.v1.ImageService/ImageStatus
	Oct 09 20:21:51 default-k8s-diff-port-417984 crio[654]: time="2025-10-09T20:21:51.273432368Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=10d7aeb8-8a8c-4868-a425-dce1267d81cf name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 20:21:51 default-k8s-diff-port-417984 crio[654]: time="2025-10-09T20:21:51.274493846Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 09 20:21:51 default-k8s-diff-port-417984 crio[654]: time="2025-10-09T20:21:51.285094956Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 09 20:21:51 default-k8s-diff-port-417984 crio[654]: time="2025-10-09T20:21:51.285462132Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/614463f11f91e4c335f683125bd173f6881f31b46f8daed1f0bb30c8aa09b1b8/merged/etc/passwd: no such file or directory"
	Oct 09 20:21:51 default-k8s-diff-port-417984 crio[654]: time="2025-10-09T20:21:51.285560324Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/614463f11f91e4c335f683125bd173f6881f31b46f8daed1f0bb30c8aa09b1b8/merged/etc/group: no such file or directory"
	Oct 09 20:21:51 default-k8s-diff-port-417984 crio[654]: time="2025-10-09T20:21:51.28590153Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 09 20:21:51 default-k8s-diff-port-417984 crio[654]: time="2025-10-09T20:21:51.312578035Z" level=info msg="Created container 39930a90aa1c8620eb52278dd704fbf540a2a3e4cf6848c5f5ce913ae24f805f: kube-system/storage-provisioner/storage-provisioner" id=10d7aeb8-8a8c-4868-a425-dce1267d81cf name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 20:21:51 default-k8s-diff-port-417984 crio[654]: time="2025-10-09T20:21:51.313327895Z" level=info msg="Starting container: 39930a90aa1c8620eb52278dd704fbf540a2a3e4cf6848c5f5ce913ae24f805f" id=c9d6011c-de74-4d98-b837-4df78bff8ff6 name=/runtime.v1.RuntimeService/StartContainer
	Oct 09 20:21:51 default-k8s-diff-port-417984 crio[654]: time="2025-10-09T20:21:51.318393411Z" level=info msg="Started container" PID=1639 containerID=39930a90aa1c8620eb52278dd704fbf540a2a3e4cf6848c5f5ce913ae24f805f description=kube-system/storage-provisioner/storage-provisioner id=c9d6011c-de74-4d98-b837-4df78bff8ff6 name=/runtime.v1.RuntimeService/StartContainer sandboxID=4c516db9f25a9d303231bec39441c8899cf3f08989687192c201d09bdc234f5d
	Oct 09 20:22:01 default-k8s-diff-port-417984 crio[654]: time="2025-10-09T20:22:01.220096055Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 09 20:22:01 default-k8s-diff-port-417984 crio[654]: time="2025-10-09T20:22:01.225625093Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 09 20:22:01 default-k8s-diff-port-417984 crio[654]: time="2025-10-09T20:22:01.225672782Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 09 20:22:01 default-k8s-diff-port-417984 crio[654]: time="2025-10-09T20:22:01.225700187Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 09 20:22:01 default-k8s-diff-port-417984 crio[654]: time="2025-10-09T20:22:01.230490131Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 09 20:22:01 default-k8s-diff-port-417984 crio[654]: time="2025-10-09T20:22:01.230663433Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 09 20:22:01 default-k8s-diff-port-417984 crio[654]: time="2025-10-09T20:22:01.230742228Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 09 20:22:01 default-k8s-diff-port-417984 crio[654]: time="2025-10-09T20:22:01.23524681Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 09 20:22:01 default-k8s-diff-port-417984 crio[654]: time="2025-10-09T20:22:01.235289494Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 09 20:22:01 default-k8s-diff-port-417984 crio[654]: time="2025-10-09T20:22:01.235312837Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 09 20:22:01 default-k8s-diff-port-417984 crio[654]: time="2025-10-09T20:22:01.240437923Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 09 20:22:01 default-k8s-diff-port-417984 crio[654]: time="2025-10-09T20:22:01.240612523Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 09 20:22:01 default-k8s-diff-port-417984 crio[654]: time="2025-10-09T20:22:01.240703379Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 09 20:22:01 default-k8s-diff-port-417984 crio[654]: time="2025-10-09T20:22:01.24657519Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 09 20:22:01 default-k8s-diff-port-417984 crio[654]: time="2025-10-09T20:22:01.246616389Z" level=info msg="Updated default CNI network name to kindnet"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED              STATE               NAME                        ATTEMPT             POD ID              POD                                                    NAMESPACE
	39930a90aa1c8       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           20 seconds ago       Running             storage-provisioner         2                   4c516db9f25a9       storage-provisioner                                    kube-system
	b722b93e81fef       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           24 seconds ago       Exited              dashboard-metrics-scraper   2                   aa4b89649b82b       dashboard-metrics-scraper-6ffb444bf9-h6nw2             kubernetes-dashboard
	909b93cd668d2       docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf   33 seconds ago       Running             kubernetes-dashboard        0                   6f0e643618745       kubernetes-dashboard-855c9754f9-m9vdk                  kubernetes-dashboard
	8b461d773e987       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                           50 seconds ago       Running             coredns                     1                   205d9f57475be       coredns-66bc5c9577-4c2vb                               kube-system
	f93aeaa06b466       1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c                                           51 seconds ago       Running             busybox                     1                   0a7e76f592278       busybox                                                default
	73c61df879c3d       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                           51 seconds ago       Running             kindnet-cni                 1                   10b48284ffa3a       kindnet-s57gp                                          kube-system
	54c509996be49       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                           51 seconds ago       Running             kube-proxy                  1                   a28ffe11e9dda       kube-proxy-jnlzf                                       kube-system
	c88c6763c4c37       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           51 seconds ago       Exited              storage-provisioner         1                   4c516db9f25a9       storage-provisioner                                    kube-system
	4eeb90a44de65       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                           About a minute ago   Running             kube-controller-manager     1                   ebca30513231d       kube-controller-manager-default-k8s-diff-port-417984   kube-system
	bef0f8b493af2       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                           About a minute ago   Running             kube-apiserver              1                   a9ecffa94ebcf       kube-apiserver-default-k8s-diff-port-417984            kube-system
	c867b182d5458       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                           About a minute ago   Running             etcd                        1                   0481e52c4d63c       etcd-default-k8s-diff-port-417984                      kube-system
	a5832f172fdf4       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                           About a minute ago   Running             kube-scheduler              1                   ef4d38d8ce3cc       kube-scheduler-default-k8s-diff-port-417984            kube-system
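The table above is the node's CRI container listing (crictl-style columns). Assuming the profile is still up, the same view can be reproduced over SSH:

    minikube -p default-k8s-diff-port-417984 ssh -- sudo crictl ps -a

The two Exited rows, dashboard-metrics-scraper attempt 2 and the first storage-provisioner attempt, match the CrashLoopBackOff and RemoveContainer messages in the kubelet section further down.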
	
	
	==> coredns [8b461d773e987839de7e20c9cd9bb4948a0996cfcf809a7c1ad3d90725546a55] <==
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:34987 - 7557 "HINFO IN 8584361611982215924.4915455720188016414. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.020362057s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-417984
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=default-k8s-diff-port-417984
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=67931d2cc0a2e29153be17ee4f2d502b8a45c9cb
	                    minikube.k8s.io/name=default-k8s-diff-port-417984
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_09T20_19_45_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 09 Oct 2025 20:19:41 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-417984
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 09 Oct 2025 20:22:00 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 09 Oct 2025 20:21:40 +0000   Thu, 09 Oct 2025 20:19:38 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 09 Oct 2025 20:21:40 +0000   Thu, 09 Oct 2025 20:19:38 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 09 Oct 2025 20:21:40 +0000   Thu, 09 Oct 2025 20:19:38 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 09 Oct 2025 20:21:40 +0000   Thu, 09 Oct 2025 20:20:31 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    default-k8s-diff-port-417984
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022292Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022292Ki
	  pods:               110
	System Info:
	  Machine ID:                 b322aa4b5a934aefb512ef2cf8432ce2
	  System UUID:                47844709-b89d-494e-8261-a7f5aabcecf0
	  Boot ID:                    7eb9c7d9-8be8-415a-b196-052b632584a4
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         96s
	  kube-system                 coredns-66bc5c9577-4c2vb                                100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     2m22s
	  kube-system                 etcd-default-k8s-diff-port-417984                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         2m27s
	  kube-system                 kindnet-s57gp                                           100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      2m22s
	  kube-system                 kube-apiserver-default-k8s-diff-port-417984             250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m28s
	  kube-system                 kube-controller-manager-default-k8s-diff-port-417984    200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m27s
	  kube-system                 kube-proxy-jnlzf                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m22s
	  kube-system                 kube-scheduler-default-k8s-diff-port-417984             100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m27s
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m20s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-h6nw2              0 (0%)        0 (0%)      0 (0%)           0 (0%)         48s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-m9vdk                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         48s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 2m20s                  kube-proxy       
	  Normal   Starting                 49s                    kube-proxy       
	  Normal   NodeHasSufficientMemory  2m35s (x8 over 2m35s)  kubelet          Node default-k8s-diff-port-417984 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m35s (x8 over 2m35s)  kubelet          Node default-k8s-diff-port-417984 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m35s (x8 over 2m35s)  kubelet          Node default-k8s-diff-port-417984 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    2m27s                  kubelet          Node default-k8s-diff-port-417984 status is now: NodeHasNoDiskPressure
	  Warning  CgroupV1                 2m27s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  2m27s                  kubelet          Node default-k8s-diff-port-417984 status is now: NodeHasSufficientMemory
	  Normal   NodeHasSufficientPID     2m27s                  kubelet          Node default-k8s-diff-port-417984 status is now: NodeHasSufficientPID
	  Normal   Starting                 2m27s                  kubelet          Starting kubelet.
	  Normal   RegisteredNode           2m23s                  node-controller  Node default-k8s-diff-port-417984 event: Registered Node default-k8s-diff-port-417984 in Controller
	  Normal   NodeReady                100s                   kubelet          Node default-k8s-diff-port-417984 status is now: NodeReady
	  Normal   Starting                 62s                    kubelet          Starting kubelet.
	  Warning  CgroupV1                 62s                    kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  62s (x8 over 62s)      kubelet          Node default-k8s-diff-port-417984 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    62s (x8 over 62s)      kubelet          Node default-k8s-diff-port-417984 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     62s (x8 over 62s)      kubelet          Node default-k8s-diff-port-417984 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           48s                    node-controller  Node default-k8s-diff-port-417984 event: Registered Node default-k8s-diff-port-417984 in Controller
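This block is kubectl describe output for the single control-plane node; the Allocated resources percentages are computed against the Allocatable figures above, so 850m of requested CPU against 2 full CPUs shows as 42%, and 220Mi of requests against the roughly 8Gi of allocatable memory shows as 2%. To regenerate it for this profile:

    kubectl --context default-k8s-diff-port-417984 describe node default-k8s-diff-port-417984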
	
	
	==> dmesg <==
	[  +2.167003] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:51] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:52] overlayfs: idmapped layers are currently not supported
	[ +41.056229] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:53] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:54] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:55] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:57] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:59] overlayfs: idmapped layers are currently not supported
	[ +30.257956] overlayfs: idmapped layers are currently not supported
	[Oct 9 20:02] overlayfs: idmapped layers are currently not supported
	[Oct 9 20:04] overlayfs: idmapped layers are currently not supported
	[Oct 9 20:06] overlayfs: idmapped layers are currently not supported
	[Oct 9 20:12] overlayfs: idmapped layers are currently not supported
	[Oct 9 20:14] overlayfs: idmapped layers are currently not supported
	[Oct 9 20:15] overlayfs: idmapped layers are currently not supported
	[Oct 9 20:16] overlayfs: idmapped layers are currently not supported
	[ +23.810739] overlayfs: idmapped layers are currently not supported
	[Oct 9 20:18] overlayfs: idmapped layers are currently not supported
	[ +26.082927] overlayfs: idmapped layers are currently not supported
	[Oct 9 20:19] overlayfs: idmapped layers are currently not supported
	[ +21.956614] overlayfs: idmapped layers are currently not supported
	[Oct 9 20:21] overlayfs: idmapped layers are currently not supported
	[ +16.062221] overlayfs: idmapped layers are currently not supported
	[ +28.876478] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [c867b182d54580a31fb8f6e96300d3d3a7d7beacfb0c84d96100f68f251ea0f6] <==
	{"level":"warn","ts":"2025-10-09T20:21:16.285352Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51082","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T20:21:16.373515Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51102","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T20:21:16.419879Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51132","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T20:21:16.473193Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51142","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T20:21:16.546888Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51184","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T20:21:16.613417Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51158","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T20:21:16.688052Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51198","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T20:21:16.721684Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51212","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T20:21:16.782596Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51242","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T20:21:16.836487Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51268","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T20:21:16.887141Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51278","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T20:21:16.944150Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51296","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T20:21:16.988795Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51314","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T20:21:17.075500Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51328","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T20:21:17.087809Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51352","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T20:21:17.131964Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51378","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T20:21:17.196372Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51394","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T20:21:17.248388Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51412","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T20:21:17.303376Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51432","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T20:21:17.349092Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51450","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T20:21:17.432522Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51466","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T20:21:17.458799Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51472","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T20:21:17.501304Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51476","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T20:21:17.549853Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51492","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T20:21:17.705202Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51516","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 20:22:12 up  3:04,  0 user,  load average: 3.80, 3.41, 2.39
	Linux default-k8s-diff-port-417984 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [73c61df879c3dc8d5b3227ca55aa7859b8d2457ba7fbefd75e8a149cbe297d0c] <==
	I1009 20:21:20.976008       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1009 20:21:20.994307       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1009 20:21:20.994479       1 main.go:148] setting mtu 1500 for CNI 
	I1009 20:21:20.994496       1 main.go:178] kindnetd IP family: "ipv4"
	I1009 20:21:20.994511       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-09T20:21:21Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1009 20:21:21.217877       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1009 20:21:21.217903       1 controller.go:381] "Waiting for informer caches to sync"
	I1009 20:21:21.217913       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1009 20:21:21.218268       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1009 20:21:51.213910       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1009 20:21:51.218529       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1009 20:21:51.218645       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1009 20:21:51.218773       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	I1009 20:21:52.818769       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1009 20:21:52.818886       1 metrics.go:72] Registering metrics
	I1009 20:21:52.818969       1 controller.go:711] "Syncing nftables rules"
	I1009 20:22:01.219768       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1009 20:22:01.219823       1 main.go:301] handling current node
	I1009 20:22:11.221565       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1009 20:22:11.221605       1 main.go:301] handling current node
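The Failed to watch errors here, like the i/o timeouts in the coredns log above and the storage-provisioner crash below, are dial timeouts to 10.96.0.1:443, the ClusterIP of the default/kubernetes Service; they cleared once the watches were retried (Caches are synced at 20:21:52). If in doubt, the ClusterIP can be confirmed with:

    kubectl --context default-k8s-diff-port-417984 -n default get svc kubernetes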
	
	
	==> kube-apiserver [bef0f8b493af26a97c449506b2fb953144bf49745a3a417030e064059e7b187a] <==
	I1009 20:21:19.191789       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1009 20:21:19.191812       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1009 20:21:19.191888       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1009 20:21:19.191921       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1009 20:21:19.208135       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1009 20:21:19.208612       1 aggregator.go:171] initial CRD sync complete...
	I1009 20:21:19.208622       1 autoregister_controller.go:144] Starting autoregister controller
	I1009 20:21:19.208627       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1009 20:21:19.208633       1 cache.go:39] Caches are synced for autoregister controller
	I1009 20:21:19.225243       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	E1009 20:21:19.229520       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1009 20:21:19.249275       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1009 20:21:19.249307       1 policy_source.go:240] refreshing policies
	I1009 20:21:19.296934       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1009 20:21:19.688740       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1009 20:21:19.924202       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1009 20:21:21.472384       1 controller.go:667] quota admission added evaluator for: namespaces
	I1009 20:21:21.603435       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1009 20:21:21.656213       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1009 20:21:21.691769       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1009 20:21:21.894012       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.98.25.142"}
	I1009 20:21:21.943199       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.99.118.165"}
	I1009 20:21:23.494394       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1009 20:21:23.683113       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1009 20:21:23.807398       1 controller.go:667] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [4eeb90a44de65c7aa6b10b300aa161b1c37aa94a4e93eadfd6975cbb0428c677] <==
	I1009 20:21:23.140484       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1009 20:21:23.140632       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1009 20:21:23.140883       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1009 20:21:23.141355       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1009 20:21:23.141427       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1009 20:21:23.141474       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1009 20:21:23.141505       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1009 20:21:23.141635       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1009 20:21:23.142227       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1009 20:21:23.143425       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1009 20:21:23.143437       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1009 20:21:23.143447       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1009 20:21:23.143454       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1009 20:21:23.143461       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1009 20:21:23.143469       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1009 20:21:23.143480       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1009 20:21:23.143489       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1009 20:21:23.147831       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1009 20:21:23.147909       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1009 20:21:23.152104       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1009 20:21:23.189648       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1009 20:21:23.189756       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1009 20:21:23.189814       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1009 20:21:23.213397       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1009 20:21:23.700214       1 endpointslice_controller.go:344] "Error syncing endpoint slices for service, retrying" logger="endpointslice-controller" key="kubernetes-dashboard/dashboard-metrics-scraper" err="EndpointSlice informer cache is out of date"
	
	
	==> kube-proxy [54c509996be49424eefe920fa96a4572f3da6bccf14cfae32e894928a28527d1] <==
	I1009 20:21:21.422879       1 server_linux.go:53] "Using iptables proxy"
	I1009 20:21:21.567636       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1009 20:21:21.669189       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1009 20:21:21.669234       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1009 20:21:21.669300       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1009 20:21:22.357263       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1009 20:21:22.360417       1 server_linux.go:132] "Using iptables Proxier"
	I1009 20:21:22.433208       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1009 20:21:22.433621       1 server.go:527] "Version info" version="v1.34.1"
	I1009 20:21:22.433647       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1009 20:21:22.434874       1 config.go:200] "Starting service config controller"
	I1009 20:21:22.434897       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1009 20:21:22.434913       1 config.go:106] "Starting endpoint slice config controller"
	I1009 20:21:22.434917       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1009 20:21:22.434934       1 config.go:403] "Starting serviceCIDR config controller"
	I1009 20:21:22.434938       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1009 20:21:22.441627       1 config.go:309] "Starting node config controller"
	I1009 20:21:22.441716       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1009 20:21:22.441753       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1009 20:21:22.536310       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1009 20:21:22.536676       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1009 20:21:22.536776       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [a5832f172fdf43a40fddfb19a9cd192309bb7216cfb2d490b21e4a51b24a923e] <==
	I1009 20:21:16.752147       1 serving.go:386] Generated self-signed cert in-memory
	I1009 20:21:19.801033       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1009 20:21:19.801068       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1009 20:21:19.871341       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1009 20:21:19.871455       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1009 20:21:19.871477       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1009 20:21:19.871512       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1009 20:21:19.880541       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1009 20:21:19.880577       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1009 20:21:19.880599       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1009 20:21:19.880606       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1009 20:21:19.982394       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1009 20:21:19.982982       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1009 20:21:19.983066       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 09 20:21:23 default-k8s-diff-port-417984 kubelet[780]: I1009 20:21:23.779087     780 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kntjz\" (UniqueName: \"kubernetes.io/projected/bfd5a7ab-0d4e-46ae-b4e4-c2c6aa18bcf2-kube-api-access-kntjz\") pod \"kubernetes-dashboard-855c9754f9-m9vdk\" (UID: \"bfd5a7ab-0d4e-46ae-b4e4-c2c6aa18bcf2\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-m9vdk"
	Oct 09 20:21:23 default-k8s-diff-port-417984 kubelet[780]: I1009 20:21:23.779165     780 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/bfd5a7ab-0d4e-46ae-b4e4-c2c6aa18bcf2-tmp-volume\") pod \"kubernetes-dashboard-855c9754f9-m9vdk\" (UID: \"bfd5a7ab-0d4e-46ae-b4e4-c2c6aa18bcf2\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-m9vdk"
	Oct 09 20:21:23 default-k8s-diff-port-417984 kubelet[780]: W1009 20:21:23.972945     780 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/1f0d0c8a230b788cd206633a19ec2c3f4c5347ad7d829fb182e003f40efd7670/crio-aa4b89649b82bd6d11a7fccc9e7e276ccba8943e4f957aa3fe1034b47aed370e WatchSource:0}: Error finding container aa4b89649b82bd6d11a7fccc9e7e276ccba8943e4f957aa3fe1034b47aed370e: Status 404 returned error can't find the container with id aa4b89649b82bd6d11a7fccc9e7e276ccba8943e4f957aa3fe1034b47aed370e
	Oct 09 20:21:24 default-k8s-diff-port-417984 kubelet[780]: W1009 20:21:24.013559     780 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/1f0d0c8a230b788cd206633a19ec2c3f4c5347ad7d829fb182e003f40efd7670/crio-6f0e643618745e58e8d80c7c908422231a6826bf7562d47dc95f49e6961335f9 WatchSource:0}: Error finding container 6f0e643618745e58e8d80c7c908422231a6826bf7562d47dc95f49e6961335f9: Status 404 returned error can't find the container with id 6f0e643618745e58e8d80c7c908422231a6826bf7562d47dc95f49e6961335f9
	Oct 09 20:21:31 default-k8s-diff-port-417984 kubelet[780]: I1009 20:21:31.204677     780 scope.go:117] "RemoveContainer" containerID="5098a8f8cf8ba15bb53f11b9e673f9c692d7dd55166e62ba6e9f1ea8261d8fd4"
	Oct 09 20:21:32 default-k8s-diff-port-417984 kubelet[780]: I1009 20:21:32.199185     780 scope.go:117] "RemoveContainer" containerID="5098a8f8cf8ba15bb53f11b9e673f9c692d7dd55166e62ba6e9f1ea8261d8fd4"
	Oct 09 20:21:32 default-k8s-diff-port-417984 kubelet[780]: I1009 20:21:32.199480     780 scope.go:117] "RemoveContainer" containerID="fddea1af52af4101ed1f1543dc1f4e7d200ef06c2d142a5bb4f6989ab04a0a9a"
	Oct 09 20:21:32 default-k8s-diff-port-417984 kubelet[780]: E1009 20:21:32.199642     780 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-h6nw2_kubernetes-dashboard(b8855708-0929-4140-a83d-860b3040005b)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-h6nw2" podUID="b8855708-0929-4140-a83d-860b3040005b"
	Oct 09 20:21:33 default-k8s-diff-port-417984 kubelet[780]: I1009 20:21:33.203021     780 scope.go:117] "RemoveContainer" containerID="fddea1af52af4101ed1f1543dc1f4e7d200ef06c2d142a5bb4f6989ab04a0a9a"
	Oct 09 20:21:33 default-k8s-diff-port-417984 kubelet[780]: E1009 20:21:33.203242     780 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-h6nw2_kubernetes-dashboard(b8855708-0929-4140-a83d-860b3040005b)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-h6nw2" podUID="b8855708-0929-4140-a83d-860b3040005b"
	Oct 09 20:21:34 default-k8s-diff-port-417984 kubelet[780]: I1009 20:21:34.205476     780 scope.go:117] "RemoveContainer" containerID="fddea1af52af4101ed1f1543dc1f4e7d200ef06c2d142a5bb4f6989ab04a0a9a"
	Oct 09 20:21:34 default-k8s-diff-port-417984 kubelet[780]: E1009 20:21:34.205739     780 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-h6nw2_kubernetes-dashboard(b8855708-0929-4140-a83d-860b3040005b)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-h6nw2" podUID="b8855708-0929-4140-a83d-860b3040005b"
	Oct 09 20:21:46 default-k8s-diff-port-417984 kubelet[780]: I1009 20:21:46.711132     780 scope.go:117] "RemoveContainer" containerID="fddea1af52af4101ed1f1543dc1f4e7d200ef06c2d142a5bb4f6989ab04a0a9a"
	Oct 09 20:21:47 default-k8s-diff-port-417984 kubelet[780]: I1009 20:21:47.245539     780 scope.go:117] "RemoveContainer" containerID="fddea1af52af4101ed1f1543dc1f4e7d200ef06c2d142a5bb4f6989ab04a0a9a"
	Oct 09 20:21:47 default-k8s-diff-port-417984 kubelet[780]: I1009 20:21:47.245914     780 scope.go:117] "RemoveContainer" containerID="b722b93e81fef15b5065babebb7c70b66d2f38666e650edc71def81153950789"
	Oct 09 20:21:47 default-k8s-diff-port-417984 kubelet[780]: E1009 20:21:47.246081     780 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-h6nw2_kubernetes-dashboard(b8855708-0929-4140-a83d-860b3040005b)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-h6nw2" podUID="b8855708-0929-4140-a83d-860b3040005b"
	Oct 09 20:21:47 default-k8s-diff-port-417984 kubelet[780]: I1009 20:21:47.277871     780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-m9vdk" podStartSLOduration=10.06412707 podStartE2EDuration="24.277851453s" podCreationTimestamp="2025-10-09 20:21:23 +0000 UTC" firstStartedPulling="2025-10-09 20:21:24.018423465 +0000 UTC m=+14.800739888" lastFinishedPulling="2025-10-09 20:21:38.232147848 +0000 UTC m=+29.014464271" observedRunningTime="2025-10-09 20:21:39.240704914 +0000 UTC m=+30.023021353" watchObservedRunningTime="2025-10-09 20:21:47.277851453 +0000 UTC m=+38.060167884"
	Oct 09 20:21:51 default-k8s-diff-port-417984 kubelet[780]: I1009 20:21:51.258852     780 scope.go:117] "RemoveContainer" containerID="c88c6763c4c37baf69c511ec04150bf21aef0cc6fc5e8c7d6be66a050b424afd"
	Oct 09 20:21:53 default-k8s-diff-port-417984 kubelet[780]: I1009 20:21:53.913299     780 scope.go:117] "RemoveContainer" containerID="b722b93e81fef15b5065babebb7c70b66d2f38666e650edc71def81153950789"
	Oct 09 20:21:53 default-k8s-diff-port-417984 kubelet[780]: E1009 20:21:53.913972     780 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-h6nw2_kubernetes-dashboard(b8855708-0929-4140-a83d-860b3040005b)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-h6nw2" podUID="b8855708-0929-4140-a83d-860b3040005b"
	Oct 09 20:22:05 default-k8s-diff-port-417984 kubelet[780]: I1009 20:22:05.712769     780 scope.go:117] "RemoveContainer" containerID="b722b93e81fef15b5065babebb7c70b66d2f38666e650edc71def81153950789"
	Oct 09 20:22:05 default-k8s-diff-port-417984 kubelet[780]: E1009 20:22:05.712941     780 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-h6nw2_kubernetes-dashboard(b8855708-0929-4140-a83d-860b3040005b)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-h6nw2" podUID="b8855708-0929-4140-a83d-860b3040005b"
	Oct 09 20:22:08 default-k8s-diff-port-417984 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 09 20:22:08 default-k8s-diff-port-417984 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 09 20:22:08 default-k8s-diff-port-417984 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	
	
	==> kubernetes-dashboard [909b93cd668d2f501c2849a4db47d77b8135382f8833dc953ca0f46547198534] <==
	2025/10/09 20:21:38 Starting overwatch
	2025/10/09 20:21:38 Using namespace: kubernetes-dashboard
	2025/10/09 20:21:38 Using in-cluster config to connect to apiserver
	2025/10/09 20:21:38 Using secret token for csrf signing
	2025/10/09 20:21:38 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/10/09 20:21:38 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/10/09 20:21:38 Successful initial request to the apiserver, version: v1.34.1
	2025/10/09 20:21:38 Generating JWE encryption key
	2025/10/09 20:21:38 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/10/09 20:21:38 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/10/09 20:21:38 Initializing JWE encryption key from synchronized object
	2025/10/09 20:21:38 Creating in-cluster Sidecar client
	2025/10/09 20:21:38 Serving insecurely on HTTP port: 9090
	2025/10/09 20:21:38 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/09 20:22:08 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
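The dashboard itself is serving on HTTP port 9090, but its metric client keeps failing, most likely because the dashboard-metrics-scraper Service has no ready endpoints while its only pod is in CrashLoopBackOff (see the kubelet section above). One way to check, assuming the same context:

    kubectl --context default-k8s-diff-port-417984 -n kubernetes-dashboard get endpoints dashboard-metrics-scraper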
	
	
	==> storage-provisioner [39930a90aa1c8620eb52278dd704fbf540a2a3e4cf6848c5f5ce913ae24f805f] <==
	I1009 20:21:51.330776       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1009 20:21:51.352311       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1009 20:21:51.352362       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1009 20:21:51.354999       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1009 20:21:54.811581       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1009 20:21:59.071559       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1009 20:22:02.670308       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1009 20:22:05.723505       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1009 20:22:08.745566       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1009 20:22:08.751325       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1009 20:22:08.751470       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1009 20:22:08.752701       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-417984_4a342155-4032-4b18-9d32-2297d6e007e2!
	I1009 20:22:08.752762       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"3c4c04ee-1793-46fc-b5b5-7f3b1c4ca9ba", APIVersion:"v1", ResourceVersion:"678", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-417984_4a342155-4032-4b18-9d32-2297d6e007e2 became leader
	W1009 20:22:08.756472       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1009 20:22:08.761348       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1009 20:22:08.853463       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-417984_4a342155-4032-4b18-9d32-2297d6e007e2!
	W1009 20:22:10.766568       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1009 20:22:10.778816       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [c88c6763c4c37baf69c511ec04150bf21aef0cc6fc5e8c7d6be66a050b424afd] <==
	I1009 20:21:20.961002       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1009 20:21:51.043119       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
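The exited storage-provisioner above died on a dial timeout to 10.96.0.1:443, and the kubelet log shows dashboard-metrics-scraper stuck in CrashLoopBackOff. As a hedged diagnostic sketch (not something the harness ran), the same profile could be probed with standard kubectl commands; the context, namespace, and pod name below are copied from the output above:

    # probe the apiserver through the profile's kubeconfig context
    kubectl --context default-k8s-diff-port-417984 get --raw /version
    # pull the previous (crashed) container's logs for the scraper pod
    kubectl --context default-k8s-diff-port-417984 -n kubernetes-dashboard \
      logs dashboard-metrics-scraper-6ffb444bf9-h6nw2 --previous

A /version response plus the previous container logs would help separate a transient apiserver restart from a persistent in-cluster networking problem.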
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-417984 -n default-k8s-diff-port-417984
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-417984 -n default-k8s-diff-port-417984: exit status 2 (475.058218ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context default-k8s-diff-port-417984 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
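The field selector in the kubectl command above only lists pods whose phase is not Running. As a rough, hand-written variant (not part of the test run), per-pod restart counts can be pulled from the same context with a jsonpath template:

    kubectl --context default-k8s-diff-port-417984 get po -A \
      -o jsonpath='{range .items[*]}{.metadata.namespace}{"\t"}{.metadata.name}{"\t"}{.status.containerStatuses[*].restartCount}{"\n"}{end}'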
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect default-k8s-diff-port-417984
helpers_test.go:243: (dbg) docker inspect default-k8s-diff-port-417984:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "1f0d0c8a230b788cd206633a19ec2c3f4c5347ad7d829fb182e003f40efd7670",
	        "Created": "2025-10-09T20:19:12.869398438Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 500475,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-09T20:20:59.998622598Z",
	            "FinishedAt": "2025-10-09T20:20:58.890170224Z"
	        },
	        "Image": "sha256:0e619a02aa1f399c748a05785ca381533d7a01df724b62f3a9ddd9e288db8f6f",
	        "ResolvConfPath": "/var/lib/docker/containers/1f0d0c8a230b788cd206633a19ec2c3f4c5347ad7d829fb182e003f40efd7670/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/1f0d0c8a230b788cd206633a19ec2c3f4c5347ad7d829fb182e003f40efd7670/hostname",
	        "HostsPath": "/var/lib/docker/containers/1f0d0c8a230b788cd206633a19ec2c3f4c5347ad7d829fb182e003f40efd7670/hosts",
	        "LogPath": "/var/lib/docker/containers/1f0d0c8a230b788cd206633a19ec2c3f4c5347ad7d829fb182e003f40efd7670/1f0d0c8a230b788cd206633a19ec2c3f4c5347ad7d829fb182e003f40efd7670-json.log",
	        "Name": "/default-k8s-diff-port-417984",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "default-k8s-diff-port-417984:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "default-k8s-diff-port-417984",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "1f0d0c8a230b788cd206633a19ec2c3f4c5347ad7d829fb182e003f40efd7670",
	                "LowerDir": "/var/lib/docker/overlay2/69199908f673e21207f723026f89b47767e510c7bef43a60d014ab9a5dff4f7d-init/diff:/var/lib/docker/overlay2/810a91395ed9b7ed2c0bbbdee8600efcf64f88722cbabc47d471235a9f901ed9/diff",
	                "MergedDir": "/var/lib/docker/overlay2/69199908f673e21207f723026f89b47767e510c7bef43a60d014ab9a5dff4f7d/merged",
	                "UpperDir": "/var/lib/docker/overlay2/69199908f673e21207f723026f89b47767e510c7bef43a60d014ab9a5dff4f7d/diff",
	                "WorkDir": "/var/lib/docker/overlay2/69199908f673e21207f723026f89b47767e510c7bef43a60d014ab9a5dff4f7d/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "default-k8s-diff-port-417984",
	                "Source": "/var/lib/docker/volumes/default-k8s-diff-port-417984/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-diff-port-417984",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-diff-port-417984",
	                "name.minikube.sigs.k8s.io": "default-k8s-diff-port-417984",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "7f49815c2c562d899a022955dbba7724c161cb955a470a67390835f36f303efc",
	            "SandboxKey": "/var/run/docker/netns/7f49815c2c56",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33451"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33452"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33455"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33453"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33454"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "default-k8s-diff-port-417984": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "f6:62:fb:00:89:d7",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "08acd2192c7aac80b9d6df51ab71eaa1736eaa95c3d16e0c4f8feb8f8a4a1db2",
	                    "EndpointID": "1f7d1bfb7257d6b58e33e4ae7ce309d90aad5e6896a7788d1f4ef7a021629616",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "default-k8s-diff-port-417984",
	                        "1f0d0c8a230b"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
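The inspect dump above carries the dynamic host-port bindings for the kicbase container (SSH on 33451, the 8444 apiserver port on 33454). As a hedged sketch using only the standard docker --format Go templates, the same fields can be read back without scanning the full JSON; the container name is the profile name from this report:

    # all host-port bindings as JSON
    docker inspect -f '{{json .NetworkSettings.Ports}}' default-k8s-diff-port-417984
    # just the host port mapped to the 8444 apiserver port
    docker inspect -f '{{(index (index .NetworkSettings.Ports "8444/tcp") 0).HostPort}}' default-k8s-diff-port-417984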
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-417984 -n default-k8s-diff-port-417984
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-417984 -n default-k8s-diff-port-417984: exit status 2 (565.188674ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
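Both status probes in this post-mortem read single fields with Go templates ({{.Host}}, {{.APIServer}}) and tolerate exit status 2. Assuming the documented --output flag on minikube status, a fuller snapshot of the same profile could be captured as JSON for later comparison; this is a sketch, not a command the harness runs:

    out/minikube-linux-arm64 status -p default-k8s-diff-port-417984 --output json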
helpers_test.go:252: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p default-k8s-diff-port-417984 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p default-k8s-diff-port-417984 logs -n 25: (1.703797771s)
helpers_test.go:260: TestStartStop/group/default-k8s-diff-port/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -p default-k8s-diff-port-417984 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-417984 │ jenkins │ v1.37.0 │ 09 Oct 25 20:19 UTC │ 09 Oct 25 20:20 UTC │
	│ addons  │ enable metrics-server -p embed-certs-565110 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-565110           │ jenkins │ v1.37.0 │ 09 Oct 25 20:19 UTC │                     │
	│ stop    │ -p embed-certs-565110 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-565110           │ jenkins │ v1.37.0 │ 09 Oct 25 20:19 UTC │ 09 Oct 25 20:19 UTC │
	│ addons  │ enable dashboard -p embed-certs-565110 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-565110           │ jenkins │ v1.37.0 │ 09 Oct 25 20:19 UTC │ 09 Oct 25 20:19 UTC │
	│ start   │ -p embed-certs-565110 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-565110           │ jenkins │ v1.37.0 │ 09 Oct 25 20:19 UTC │ 09 Oct 25 20:20 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-417984 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-417984 │ jenkins │ v1.37.0 │ 09 Oct 25 20:20 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-417984 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-417984 │ jenkins │ v1.37.0 │ 09 Oct 25 20:20 UTC │ 09 Oct 25 20:20 UTC │
	│ image   │ embed-certs-565110 image list --format=json                                                                                                                                                                                                   │ embed-certs-565110           │ jenkins │ v1.37.0 │ 09 Oct 25 20:20 UTC │ 09 Oct 25 20:20 UTC │
	│ pause   │ -p embed-certs-565110 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-565110           │ jenkins │ v1.37.0 │ 09 Oct 25 20:20 UTC │                     │
	│ addons  │ enable dashboard -p default-k8s-diff-port-417984 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-417984 │ jenkins │ v1.37.0 │ 09 Oct 25 20:20 UTC │ 09 Oct 25 20:20 UTC │
	│ delete  │ -p embed-certs-565110                                                                                                                                                                                                                         │ embed-certs-565110           │ jenkins │ v1.37.0 │ 09 Oct 25 20:20 UTC │ 09 Oct 25 20:21 UTC │
	│ start   │ -p default-k8s-diff-port-417984 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-417984 │ jenkins │ v1.37.0 │ 09 Oct 25 20:20 UTC │ 09 Oct 25 20:21 UTC │
	│ delete  │ -p embed-certs-565110                                                                                                                                                                                                                         │ embed-certs-565110           │ jenkins │ v1.37.0 │ 09 Oct 25 20:21 UTC │ 09 Oct 25 20:21 UTC │
	│ start   │ -p newest-cni-160257 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-160257            │ jenkins │ v1.37.0 │ 09 Oct 25 20:21 UTC │ 09 Oct 25 20:21 UTC │
	│ addons  │ enable metrics-server -p newest-cni-160257 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-160257            │ jenkins │ v1.37.0 │ 09 Oct 25 20:21 UTC │                     │
	│ stop    │ -p newest-cni-160257 --alsologtostderr -v=3                                                                                                                                                                                                   │ newest-cni-160257            │ jenkins │ v1.37.0 │ 09 Oct 25 20:21 UTC │ 09 Oct 25 20:21 UTC │
	│ addons  │ enable dashboard -p newest-cni-160257 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ newest-cni-160257            │ jenkins │ v1.37.0 │ 09 Oct 25 20:21 UTC │ 09 Oct 25 20:21 UTC │
	│ start   │ -p newest-cni-160257 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-160257            │ jenkins │ v1.37.0 │ 09 Oct 25 20:21 UTC │ 09 Oct 25 20:22 UTC │
	│ image   │ newest-cni-160257 image list --format=json                                                                                                                                                                                                    │ newest-cni-160257            │ jenkins │ v1.37.0 │ 09 Oct 25 20:22 UTC │ 09 Oct 25 20:22 UTC │
	│ pause   │ -p newest-cni-160257 --alsologtostderr -v=1                                                                                                                                                                                                   │ newest-cni-160257            │ jenkins │ v1.37.0 │ 09 Oct 25 20:22 UTC │                     │
	│ image   │ default-k8s-diff-port-417984 image list --format=json                                                                                                                                                                                         │ default-k8s-diff-port-417984 │ jenkins │ v1.37.0 │ 09 Oct 25 20:22 UTC │ 09 Oct 25 20:22 UTC │
	│ pause   │ -p default-k8s-diff-port-417984 --alsologtostderr -v=1                                                                                                                                                                                        │ default-k8s-diff-port-417984 │ jenkins │ v1.37.0 │ 09 Oct 25 20:22 UTC │                     │
	│ delete  │ -p newest-cni-160257                                                                                                                                                                                                                          │ newest-cni-160257            │ jenkins │ v1.37.0 │ 09 Oct 25 20:22 UTC │ 09 Oct 25 20:22 UTC │
	│ delete  │ -p newest-cni-160257                                                                                                                                                                                                                          │ newest-cni-160257            │ jenkins │ v1.37.0 │ 09 Oct 25 20:22 UTC │ 09 Oct 25 20:22 UTC │
	│ start   │ -p auto-535911 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio                                                                                                                       │ auto-535911                  │ jenkins │ v1.37.0 │ 09 Oct 25 20:22 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/09 20:22:13
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1009 20:22:13.777488  509636 out.go:360] Setting OutFile to fd 1 ...
	I1009 20:22:13.777656  509636 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1009 20:22:13.777677  509636 out.go:374] Setting ErrFile to fd 2...
	I1009 20:22:13.777682  509636 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1009 20:22:13.777971  509636 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21683-294150/.minikube/bin
	I1009 20:22:13.778430  509636 out.go:368] Setting JSON to false
	I1009 20:22:13.779364  509636 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":11073,"bootTime":1760030261,"procs":187,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1009 20:22:13.779436  509636 start.go:143] virtualization:  
	I1009 20:22:13.783325  509636 out.go:179] * [auto-535911] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1009 20:22:13.786771  509636 out.go:179]   - MINIKUBE_LOCATION=21683
	I1009 20:22:13.786832  509636 notify.go:221] Checking for updates...
	I1009 20:22:13.790162  509636 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1009 20:22:13.793303  509636 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21683-294150/kubeconfig
	I1009 20:22:13.796426  509636 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21683-294150/.minikube
	I1009 20:22:13.799355  509636 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1009 20:22:13.802348  509636 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	
	
	==> CRI-O <==
	Oct 09 20:21:51 default-k8s-diff-port-417984 crio[654]: time="2025-10-09T20:21:51.270477016Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=17fdd05e-b724-447a-95c7-628014c429d9 name=/runtime.v1.ImageService/ImageStatus
	Oct 09 20:21:51 default-k8s-diff-port-417984 crio[654]: time="2025-10-09T20:21:51.273432368Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=10d7aeb8-8a8c-4868-a425-dce1267d81cf name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 20:21:51 default-k8s-diff-port-417984 crio[654]: time="2025-10-09T20:21:51.274493846Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 09 20:21:51 default-k8s-diff-port-417984 crio[654]: time="2025-10-09T20:21:51.285094956Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 09 20:21:51 default-k8s-diff-port-417984 crio[654]: time="2025-10-09T20:21:51.285462132Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/614463f11f91e4c335f683125bd173f6881f31b46f8daed1f0bb30c8aa09b1b8/merged/etc/passwd: no such file or directory"
	Oct 09 20:21:51 default-k8s-diff-port-417984 crio[654]: time="2025-10-09T20:21:51.285560324Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/614463f11f91e4c335f683125bd173f6881f31b46f8daed1f0bb30c8aa09b1b8/merged/etc/group: no such file or directory"
	Oct 09 20:21:51 default-k8s-diff-port-417984 crio[654]: time="2025-10-09T20:21:51.28590153Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 09 20:21:51 default-k8s-diff-port-417984 crio[654]: time="2025-10-09T20:21:51.312578035Z" level=info msg="Created container 39930a90aa1c8620eb52278dd704fbf540a2a3e4cf6848c5f5ce913ae24f805f: kube-system/storage-provisioner/storage-provisioner" id=10d7aeb8-8a8c-4868-a425-dce1267d81cf name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 20:21:51 default-k8s-diff-port-417984 crio[654]: time="2025-10-09T20:21:51.313327895Z" level=info msg="Starting container: 39930a90aa1c8620eb52278dd704fbf540a2a3e4cf6848c5f5ce913ae24f805f" id=c9d6011c-de74-4d98-b837-4df78bff8ff6 name=/runtime.v1.RuntimeService/StartContainer
	Oct 09 20:21:51 default-k8s-diff-port-417984 crio[654]: time="2025-10-09T20:21:51.318393411Z" level=info msg="Started container" PID=1639 containerID=39930a90aa1c8620eb52278dd704fbf540a2a3e4cf6848c5f5ce913ae24f805f description=kube-system/storage-provisioner/storage-provisioner id=c9d6011c-de74-4d98-b837-4df78bff8ff6 name=/runtime.v1.RuntimeService/StartContainer sandboxID=4c516db9f25a9d303231bec39441c8899cf3f08989687192c201d09bdc234f5d
	Oct 09 20:22:01 default-k8s-diff-port-417984 crio[654]: time="2025-10-09T20:22:01.220096055Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 09 20:22:01 default-k8s-diff-port-417984 crio[654]: time="2025-10-09T20:22:01.225625093Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 09 20:22:01 default-k8s-diff-port-417984 crio[654]: time="2025-10-09T20:22:01.225672782Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 09 20:22:01 default-k8s-diff-port-417984 crio[654]: time="2025-10-09T20:22:01.225700187Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 09 20:22:01 default-k8s-diff-port-417984 crio[654]: time="2025-10-09T20:22:01.230490131Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 09 20:22:01 default-k8s-diff-port-417984 crio[654]: time="2025-10-09T20:22:01.230663433Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 09 20:22:01 default-k8s-diff-port-417984 crio[654]: time="2025-10-09T20:22:01.230742228Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 09 20:22:01 default-k8s-diff-port-417984 crio[654]: time="2025-10-09T20:22:01.23524681Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 09 20:22:01 default-k8s-diff-port-417984 crio[654]: time="2025-10-09T20:22:01.235289494Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 09 20:22:01 default-k8s-diff-port-417984 crio[654]: time="2025-10-09T20:22:01.235312837Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 09 20:22:01 default-k8s-diff-port-417984 crio[654]: time="2025-10-09T20:22:01.240437923Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 09 20:22:01 default-k8s-diff-port-417984 crio[654]: time="2025-10-09T20:22:01.240612523Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 09 20:22:01 default-k8s-diff-port-417984 crio[654]: time="2025-10-09T20:22:01.240703379Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 09 20:22:01 default-k8s-diff-port-417984 crio[654]: time="2025-10-09T20:22:01.24657519Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 09 20:22:01 default-k8s-diff-port-417984 crio[654]: time="2025-10-09T20:22:01.246616389Z" level=info msg="Updated default CNI network name to kindnet"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED              STATE               NAME                        ATTEMPT             POD ID              POD                                                    NAMESPACE
	39930a90aa1c8       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           23 seconds ago       Running             storage-provisioner         2                   4c516db9f25a9       storage-provisioner                                    kube-system
	b722b93e81fef       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           27 seconds ago       Exited              dashboard-metrics-scraper   2                   aa4b89649b82b       dashboard-metrics-scraper-6ffb444bf9-h6nw2             kubernetes-dashboard
	909b93cd668d2       docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf   36 seconds ago       Running             kubernetes-dashboard        0                   6f0e643618745       kubernetes-dashboard-855c9754f9-m9vdk                  kubernetes-dashboard
	8b461d773e987       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                           53 seconds ago       Running             coredns                     1                   205d9f57475be       coredns-66bc5c9577-4c2vb                               kube-system
	f93aeaa06b466       1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c                                           54 seconds ago       Running             busybox                     1                   0a7e76f592278       busybox                                                default
	73c61df879c3d       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                           54 seconds ago       Running             kindnet-cni                 1                   10b48284ffa3a       kindnet-s57gp                                          kube-system
	54c509996be49       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                           54 seconds ago       Running             kube-proxy                  1                   a28ffe11e9dda       kube-proxy-jnlzf                                       kube-system
	c88c6763c4c37       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           54 seconds ago       Exited              storage-provisioner         1                   4c516db9f25a9       storage-provisioner                                    kube-system
	4eeb90a44de65       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                           About a minute ago   Running             kube-controller-manager     1                   ebca30513231d       kube-controller-manager-default-k8s-diff-port-417984   kube-system
	bef0f8b493af2       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                           About a minute ago   Running             kube-apiserver              1                   a9ecffa94ebcf       kube-apiserver-default-k8s-diff-port-417984            kube-system
	c867b182d5458       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                           About a minute ago   Running             etcd                        1                   0481e52c4d63c       etcd-default-k8s-diff-port-417984                      kube-system
	a5832f172fdf4       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                           About a minute ago   Running             kube-scheduler              1                   ef4d38d8ce3cc       kube-scheduler-default-k8s-diff-port-417984            kube-system
	
	
	==> coredns [8b461d773e987839de7e20c9cd9bb4948a0996cfcf809a7c1ad3d90725546a55] <==
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:34987 - 7557 "HINFO IN 8584361611982215924.4915455720188016414. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.020362057s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-417984
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=default-k8s-diff-port-417984
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=67931d2cc0a2e29153be17ee4f2d502b8a45c9cb
	                    minikube.k8s.io/name=default-k8s-diff-port-417984
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_09T20_19_45_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 09 Oct 2025 20:19:41 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-417984
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 09 Oct 2025 20:22:00 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 09 Oct 2025 20:21:40 +0000   Thu, 09 Oct 2025 20:19:38 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 09 Oct 2025 20:21:40 +0000   Thu, 09 Oct 2025 20:19:38 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 09 Oct 2025 20:21:40 +0000   Thu, 09 Oct 2025 20:19:38 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 09 Oct 2025 20:21:40 +0000   Thu, 09 Oct 2025 20:20:31 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    default-k8s-diff-port-417984
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022292Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022292Ki
	  pods:               110
	System Info:
	  Machine ID:                 b322aa4b5a934aefb512ef2cf8432ce2
	  System UUID:                47844709-b89d-494e-8261-a7f5aabcecf0
	  Boot ID:                    7eb9c7d9-8be8-415a-b196-052b632584a4
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         99s
	  kube-system                 coredns-66bc5c9577-4c2vb                                100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     2m25s
	  kube-system                 etcd-default-k8s-diff-port-417984                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         2m30s
	  kube-system                 kindnet-s57gp                                           100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      2m25s
	  kube-system                 kube-apiserver-default-k8s-diff-port-417984             250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m31s
	  kube-system                 kube-controller-manager-default-k8s-diff-port-417984    200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m30s
	  kube-system                 kube-proxy-jnlzf                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m25s
	  kube-system                 kube-scheduler-default-k8s-diff-port-417984             100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m30s
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m23s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-h6nw2              0 (0%)        0 (0%)      0 (0%)           0 (0%)         51s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-m9vdk                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         51s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 2m22s                  kube-proxy       
	  Normal   Starting                 52s                    kube-proxy       
	  Normal   NodeHasSufficientMemory  2m38s (x8 over 2m38s)  kubelet          Node default-k8s-diff-port-417984 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m38s (x8 over 2m38s)  kubelet          Node default-k8s-diff-port-417984 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m38s (x8 over 2m38s)  kubelet          Node default-k8s-diff-port-417984 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    2m30s                  kubelet          Node default-k8s-diff-port-417984 status is now: NodeHasNoDiskPressure
	  Warning  CgroupV1                 2m30s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  2m30s                  kubelet          Node default-k8s-diff-port-417984 status is now: NodeHasSufficientMemory
	  Normal   NodeHasSufficientPID     2m30s                  kubelet          Node default-k8s-diff-port-417984 status is now: NodeHasSufficientPID
	  Normal   Starting                 2m30s                  kubelet          Starting kubelet.
	  Normal   RegisteredNode           2m26s                  node-controller  Node default-k8s-diff-port-417984 event: Registered Node default-k8s-diff-port-417984 in Controller
	  Normal   NodeReady                103s                   kubelet          Node default-k8s-diff-port-417984 status is now: NodeReady
	  Normal   Starting                 65s                    kubelet          Starting kubelet.
	  Warning  CgroupV1                 65s                    kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  65s (x8 over 65s)      kubelet          Node default-k8s-diff-port-417984 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    65s (x8 over 65s)      kubelet          Node default-k8s-diff-port-417984 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     65s (x8 over 65s)      kubelet          Node default-k8s-diff-port-417984 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           51s                    node-controller  Node default-k8s-diff-port-417984 event: Registered Node default-k8s-diff-port-417984 in Controller
	
	
	==> dmesg <==
	[  +2.167003] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:51] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:52] overlayfs: idmapped layers are currently not supported
	[ +41.056229] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:53] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:54] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:55] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:57] overlayfs: idmapped layers are currently not supported
	[Oct 9 19:59] overlayfs: idmapped layers are currently not supported
	[ +30.257956] overlayfs: idmapped layers are currently not supported
	[Oct 9 20:02] overlayfs: idmapped layers are currently not supported
	[Oct 9 20:04] overlayfs: idmapped layers are currently not supported
	[Oct 9 20:06] overlayfs: idmapped layers are currently not supported
	[Oct 9 20:12] overlayfs: idmapped layers are currently not supported
	[Oct 9 20:14] overlayfs: idmapped layers are currently not supported
	[Oct 9 20:15] overlayfs: idmapped layers are currently not supported
	[Oct 9 20:16] overlayfs: idmapped layers are currently not supported
	[ +23.810739] overlayfs: idmapped layers are currently not supported
	[Oct 9 20:18] overlayfs: idmapped layers are currently not supported
	[ +26.082927] overlayfs: idmapped layers are currently not supported
	[Oct 9 20:19] overlayfs: idmapped layers are currently not supported
	[ +21.956614] overlayfs: idmapped layers are currently not supported
	[Oct 9 20:21] overlayfs: idmapped layers are currently not supported
	[ +16.062221] overlayfs: idmapped layers are currently not supported
	[ +28.876478] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [c867b182d54580a31fb8f6e96300d3d3a7d7beacfb0c84d96100f68f251ea0f6] <==
	{"level":"warn","ts":"2025-10-09T20:21:16.285352Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51082","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T20:21:16.373515Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51102","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T20:21:16.419879Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51132","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T20:21:16.473193Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51142","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T20:21:16.546888Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51184","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T20:21:16.613417Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51158","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T20:21:16.688052Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51198","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T20:21:16.721684Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51212","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T20:21:16.782596Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51242","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T20:21:16.836487Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51268","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T20:21:16.887141Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51278","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T20:21:16.944150Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51296","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T20:21:16.988795Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51314","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T20:21:17.075500Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51328","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T20:21:17.087809Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51352","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T20:21:17.131964Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51378","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T20:21:17.196372Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51394","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T20:21:17.248388Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51412","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T20:21:17.303376Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51432","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T20:21:17.349092Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51450","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T20:21:17.432522Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51466","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T20:21:17.458799Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51472","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T20:21:17.501304Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51476","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T20:21:17.549853Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51492","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T20:21:17.705202Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51516","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 20:22:14 up  3:04,  0 user,  load average: 3.98, 3.46, 2.41
	Linux default-k8s-diff-port-417984 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [73c61df879c3dc8d5b3227ca55aa7859b8d2457ba7fbefd75e8a149cbe297d0c] <==
	I1009 20:21:20.976008       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1009 20:21:20.994307       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1009 20:21:20.994479       1 main.go:148] setting mtu 1500 for CNI 
	I1009 20:21:20.994496       1 main.go:178] kindnetd IP family: "ipv4"
	I1009 20:21:20.994511       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-09T20:21:21Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1009 20:21:21.217877       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1009 20:21:21.217903       1 controller.go:381] "Waiting for informer caches to sync"
	I1009 20:21:21.217913       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1009 20:21:21.218268       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1009 20:21:51.213910       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1009 20:21:51.218529       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1009 20:21:51.218645       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1009 20:21:51.218773       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	I1009 20:21:52.818769       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1009 20:21:52.818886       1 metrics.go:72] Registering metrics
	I1009 20:21:52.818969       1 controller.go:711] "Syncing nftables rules"
	I1009 20:22:01.219768       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1009 20:22:01.219823       1 main.go:301] handling current node
	I1009 20:22:11.221565       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1009 20:22:11.221605       1 main.go:301] handling current node
	
	
	==> kube-apiserver [bef0f8b493af26a97c449506b2fb953144bf49745a3a417030e064059e7b187a] <==
	I1009 20:21:19.191789       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1009 20:21:19.191812       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1009 20:21:19.191888       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1009 20:21:19.191921       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1009 20:21:19.208135       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1009 20:21:19.208612       1 aggregator.go:171] initial CRD sync complete...
	I1009 20:21:19.208622       1 autoregister_controller.go:144] Starting autoregister controller
	I1009 20:21:19.208627       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1009 20:21:19.208633       1 cache.go:39] Caches are synced for autoregister controller
	I1009 20:21:19.225243       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	E1009 20:21:19.229520       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1009 20:21:19.249275       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1009 20:21:19.249307       1 policy_source.go:240] refreshing policies
	I1009 20:21:19.296934       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1009 20:21:19.688740       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1009 20:21:19.924202       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1009 20:21:21.472384       1 controller.go:667] quota admission added evaluator for: namespaces
	I1009 20:21:21.603435       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1009 20:21:21.656213       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1009 20:21:21.691769       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1009 20:21:21.894012       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.98.25.142"}
	I1009 20:21:21.943199       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.99.118.165"}
	I1009 20:21:23.494394       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1009 20:21:23.683113       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1009 20:21:23.807398       1 controller.go:667] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [4eeb90a44de65c7aa6b10b300aa161b1c37aa94a4e93eadfd6975cbb0428c677] <==
	I1009 20:21:23.140484       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1009 20:21:23.140632       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1009 20:21:23.140883       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1009 20:21:23.141355       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1009 20:21:23.141427       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1009 20:21:23.141474       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1009 20:21:23.141505       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1009 20:21:23.141635       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1009 20:21:23.142227       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1009 20:21:23.143425       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1009 20:21:23.143437       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1009 20:21:23.143447       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1009 20:21:23.143454       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1009 20:21:23.143461       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1009 20:21:23.143469       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1009 20:21:23.143480       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1009 20:21:23.143489       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1009 20:21:23.147831       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1009 20:21:23.147909       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1009 20:21:23.152104       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1009 20:21:23.189648       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1009 20:21:23.189756       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1009 20:21:23.189814       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1009 20:21:23.213397       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1009 20:21:23.700214       1 endpointslice_controller.go:344] "Error syncing endpoint slices for service, retrying" logger="endpointslice-controller" key="kubernetes-dashboard/dashboard-metrics-scraper" err="EndpointSlice informer cache is out of date"
	
	
	==> kube-proxy [54c509996be49424eefe920fa96a4572f3da6bccf14cfae32e894928a28527d1] <==
	I1009 20:21:21.422879       1 server_linux.go:53] "Using iptables proxy"
	I1009 20:21:21.567636       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1009 20:21:21.669189       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1009 20:21:21.669234       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1009 20:21:21.669300       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1009 20:21:22.357263       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1009 20:21:22.360417       1 server_linux.go:132] "Using iptables Proxier"
	I1009 20:21:22.433208       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1009 20:21:22.433621       1 server.go:527] "Version info" version="v1.34.1"
	I1009 20:21:22.433647       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1009 20:21:22.434874       1 config.go:200] "Starting service config controller"
	I1009 20:21:22.434897       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1009 20:21:22.434913       1 config.go:106] "Starting endpoint slice config controller"
	I1009 20:21:22.434917       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1009 20:21:22.434934       1 config.go:403] "Starting serviceCIDR config controller"
	I1009 20:21:22.434938       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1009 20:21:22.441627       1 config.go:309] "Starting node config controller"
	I1009 20:21:22.441716       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1009 20:21:22.441753       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1009 20:21:22.536310       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1009 20:21:22.536676       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1009 20:21:22.536776       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [a5832f172fdf43a40fddfb19a9cd192309bb7216cfb2d490b21e4a51b24a923e] <==
	I1009 20:21:16.752147       1 serving.go:386] Generated self-signed cert in-memory
	I1009 20:21:19.801033       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1009 20:21:19.801068       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1009 20:21:19.871341       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1009 20:21:19.871455       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1009 20:21:19.871477       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1009 20:21:19.871512       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1009 20:21:19.880541       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1009 20:21:19.880577       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1009 20:21:19.880599       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1009 20:21:19.880606       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1009 20:21:19.982394       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1009 20:21:19.982982       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1009 20:21:19.983066       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 09 20:21:23 default-k8s-diff-port-417984 kubelet[780]: I1009 20:21:23.779087     780 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kntjz\" (UniqueName: \"kubernetes.io/projected/bfd5a7ab-0d4e-46ae-b4e4-c2c6aa18bcf2-kube-api-access-kntjz\") pod \"kubernetes-dashboard-855c9754f9-m9vdk\" (UID: \"bfd5a7ab-0d4e-46ae-b4e4-c2c6aa18bcf2\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-m9vdk"
	Oct 09 20:21:23 default-k8s-diff-port-417984 kubelet[780]: I1009 20:21:23.779165     780 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/bfd5a7ab-0d4e-46ae-b4e4-c2c6aa18bcf2-tmp-volume\") pod \"kubernetes-dashboard-855c9754f9-m9vdk\" (UID: \"bfd5a7ab-0d4e-46ae-b4e4-c2c6aa18bcf2\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-m9vdk"
	Oct 09 20:21:23 default-k8s-diff-port-417984 kubelet[780]: W1009 20:21:23.972945     780 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/1f0d0c8a230b788cd206633a19ec2c3f4c5347ad7d829fb182e003f40efd7670/crio-aa4b89649b82bd6d11a7fccc9e7e276ccba8943e4f957aa3fe1034b47aed370e WatchSource:0}: Error finding container aa4b89649b82bd6d11a7fccc9e7e276ccba8943e4f957aa3fe1034b47aed370e: Status 404 returned error can't find the container with id aa4b89649b82bd6d11a7fccc9e7e276ccba8943e4f957aa3fe1034b47aed370e
	Oct 09 20:21:24 default-k8s-diff-port-417984 kubelet[780]: W1009 20:21:24.013559     780 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/1f0d0c8a230b788cd206633a19ec2c3f4c5347ad7d829fb182e003f40efd7670/crio-6f0e643618745e58e8d80c7c908422231a6826bf7562d47dc95f49e6961335f9 WatchSource:0}: Error finding container 6f0e643618745e58e8d80c7c908422231a6826bf7562d47dc95f49e6961335f9: Status 404 returned error can't find the container with id 6f0e643618745e58e8d80c7c908422231a6826bf7562d47dc95f49e6961335f9
	Oct 09 20:21:31 default-k8s-diff-port-417984 kubelet[780]: I1009 20:21:31.204677     780 scope.go:117] "RemoveContainer" containerID="5098a8f8cf8ba15bb53f11b9e673f9c692d7dd55166e62ba6e9f1ea8261d8fd4"
	Oct 09 20:21:32 default-k8s-diff-port-417984 kubelet[780]: I1009 20:21:32.199185     780 scope.go:117] "RemoveContainer" containerID="5098a8f8cf8ba15bb53f11b9e673f9c692d7dd55166e62ba6e9f1ea8261d8fd4"
	Oct 09 20:21:32 default-k8s-diff-port-417984 kubelet[780]: I1009 20:21:32.199480     780 scope.go:117] "RemoveContainer" containerID="fddea1af52af4101ed1f1543dc1f4e7d200ef06c2d142a5bb4f6989ab04a0a9a"
	Oct 09 20:21:32 default-k8s-diff-port-417984 kubelet[780]: E1009 20:21:32.199642     780 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-h6nw2_kubernetes-dashboard(b8855708-0929-4140-a83d-860b3040005b)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-h6nw2" podUID="b8855708-0929-4140-a83d-860b3040005b"
	Oct 09 20:21:33 default-k8s-diff-port-417984 kubelet[780]: I1009 20:21:33.203021     780 scope.go:117] "RemoveContainer" containerID="fddea1af52af4101ed1f1543dc1f4e7d200ef06c2d142a5bb4f6989ab04a0a9a"
	Oct 09 20:21:33 default-k8s-diff-port-417984 kubelet[780]: E1009 20:21:33.203242     780 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-h6nw2_kubernetes-dashboard(b8855708-0929-4140-a83d-860b3040005b)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-h6nw2" podUID="b8855708-0929-4140-a83d-860b3040005b"
	Oct 09 20:21:34 default-k8s-diff-port-417984 kubelet[780]: I1009 20:21:34.205476     780 scope.go:117] "RemoveContainer" containerID="fddea1af52af4101ed1f1543dc1f4e7d200ef06c2d142a5bb4f6989ab04a0a9a"
	Oct 09 20:21:34 default-k8s-diff-port-417984 kubelet[780]: E1009 20:21:34.205739     780 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-h6nw2_kubernetes-dashboard(b8855708-0929-4140-a83d-860b3040005b)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-h6nw2" podUID="b8855708-0929-4140-a83d-860b3040005b"
	Oct 09 20:21:46 default-k8s-diff-port-417984 kubelet[780]: I1009 20:21:46.711132     780 scope.go:117] "RemoveContainer" containerID="fddea1af52af4101ed1f1543dc1f4e7d200ef06c2d142a5bb4f6989ab04a0a9a"
	Oct 09 20:21:47 default-k8s-diff-port-417984 kubelet[780]: I1009 20:21:47.245539     780 scope.go:117] "RemoveContainer" containerID="fddea1af52af4101ed1f1543dc1f4e7d200ef06c2d142a5bb4f6989ab04a0a9a"
	Oct 09 20:21:47 default-k8s-diff-port-417984 kubelet[780]: I1009 20:21:47.245914     780 scope.go:117] "RemoveContainer" containerID="b722b93e81fef15b5065babebb7c70b66d2f38666e650edc71def81153950789"
	Oct 09 20:21:47 default-k8s-diff-port-417984 kubelet[780]: E1009 20:21:47.246081     780 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-h6nw2_kubernetes-dashboard(b8855708-0929-4140-a83d-860b3040005b)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-h6nw2" podUID="b8855708-0929-4140-a83d-860b3040005b"
	Oct 09 20:21:47 default-k8s-diff-port-417984 kubelet[780]: I1009 20:21:47.277871     780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-m9vdk" podStartSLOduration=10.06412707 podStartE2EDuration="24.277851453s" podCreationTimestamp="2025-10-09 20:21:23 +0000 UTC" firstStartedPulling="2025-10-09 20:21:24.018423465 +0000 UTC m=+14.800739888" lastFinishedPulling="2025-10-09 20:21:38.232147848 +0000 UTC m=+29.014464271" observedRunningTime="2025-10-09 20:21:39.240704914 +0000 UTC m=+30.023021353" watchObservedRunningTime="2025-10-09 20:21:47.277851453 +0000 UTC m=+38.060167884"
	Oct 09 20:21:51 default-k8s-diff-port-417984 kubelet[780]: I1009 20:21:51.258852     780 scope.go:117] "RemoveContainer" containerID="c88c6763c4c37baf69c511ec04150bf21aef0cc6fc5e8c7d6be66a050b424afd"
	Oct 09 20:21:53 default-k8s-diff-port-417984 kubelet[780]: I1009 20:21:53.913299     780 scope.go:117] "RemoveContainer" containerID="b722b93e81fef15b5065babebb7c70b66d2f38666e650edc71def81153950789"
	Oct 09 20:21:53 default-k8s-diff-port-417984 kubelet[780]: E1009 20:21:53.913972     780 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-h6nw2_kubernetes-dashboard(b8855708-0929-4140-a83d-860b3040005b)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-h6nw2" podUID="b8855708-0929-4140-a83d-860b3040005b"
	Oct 09 20:22:05 default-k8s-diff-port-417984 kubelet[780]: I1009 20:22:05.712769     780 scope.go:117] "RemoveContainer" containerID="b722b93e81fef15b5065babebb7c70b66d2f38666e650edc71def81153950789"
	Oct 09 20:22:05 default-k8s-diff-port-417984 kubelet[780]: E1009 20:22:05.712941     780 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-h6nw2_kubernetes-dashboard(b8855708-0929-4140-a83d-860b3040005b)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-h6nw2" podUID="b8855708-0929-4140-a83d-860b3040005b"
	Oct 09 20:22:08 default-k8s-diff-port-417984 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 09 20:22:08 default-k8s-diff-port-417984 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 09 20:22:08 default-k8s-diff-port-417984 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	
	
	==> kubernetes-dashboard [909b93cd668d2f501c2849a4db47d77b8135382f8833dc953ca0f46547198534] <==
	2025/10/09 20:21:38 Using namespace: kubernetes-dashboard
	2025/10/09 20:21:38 Using in-cluster config to connect to apiserver
	2025/10/09 20:21:38 Using secret token for csrf signing
	2025/10/09 20:21:38 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/10/09 20:21:38 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/10/09 20:21:38 Successful initial request to the apiserver, version: v1.34.1
	2025/10/09 20:21:38 Generating JWE encryption key
	2025/10/09 20:21:38 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/10/09 20:21:38 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/10/09 20:21:38 Initializing JWE encryption key from synchronized object
	2025/10/09 20:21:38 Creating in-cluster Sidecar client
	2025/10/09 20:21:38 Serving insecurely on HTTP port: 9090
	2025/10/09 20:21:38 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/09 20:22:08 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/09 20:21:38 Starting overwatch
	
	
	==> storage-provisioner [39930a90aa1c8620eb52278dd704fbf540a2a3e4cf6848c5f5ce913ae24f805f] <==
	I1009 20:21:51.330776       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1009 20:21:51.352311       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1009 20:21:51.352362       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1009 20:21:51.354999       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1009 20:21:54.811581       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1009 20:21:59.071559       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1009 20:22:02.670308       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1009 20:22:05.723505       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1009 20:22:08.745566       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1009 20:22:08.751325       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1009 20:22:08.751470       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1009 20:22:08.752701       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-417984_4a342155-4032-4b18-9d32-2297d6e007e2!
	I1009 20:22:08.752762       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"3c4c04ee-1793-46fc-b5b5-7f3b1c4ca9ba", APIVersion:"v1", ResourceVersion:"678", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-417984_4a342155-4032-4b18-9d32-2297d6e007e2 became leader
	W1009 20:22:08.756472       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1009 20:22:08.761348       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1009 20:22:08.853463       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-417984_4a342155-4032-4b18-9d32-2297d6e007e2!
	W1009 20:22:10.766568       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1009 20:22:10.778816       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1009 20:22:12.789230       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1009 20:22:12.804375       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1009 20:22:14.815245       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1009 20:22:14.820731       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [c88c6763c4c37baf69c511ec04150bf21aef0cc6fc5e8c7d6be66a050b424afd] <==
	I1009 20:21:20.961002       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1009 20:21:51.043119       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-417984 -n default-k8s-diff-port-417984
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-417984 -n default-k8s-diff-port-417984: exit status 2 (422.058517ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context default-k8s-diff-port-417984 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/Pause (7.99s)
E1009 20:28:12.109483  296002 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/flannel-535911/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1009 20:28:12.118119  296002 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/flannel-535911/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1009 20:28:12.129474  296002 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/flannel-535911/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1009 20:28:12.150858  296002 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/flannel-535911/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1009 20:28:12.192287  296002 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/flannel-535911/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1009 20:28:12.273736  296002 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/flannel-535911/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1009 20:28:12.435241  296002 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/flannel-535911/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1009 20:28:12.757083  296002 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/flannel-535911/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1009 20:28:13.398563  296002 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/flannel-535911/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1009 20:28:14.680247  296002 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/flannel-535911/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1009 20:28:17.242345  296002 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/flannel-535911/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1009 20:28:19.287703  296002 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/default-k8s-diff-port-417984/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1009 20:28:22.363656  296002 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/flannel-535911/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1009 20:28:32.605270  296002 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/flannel-535911/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1009 20:28:39.455996  296002 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/auto-535911/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1009 20:28:39.462556  296002 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/auto-535911/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1009 20:28:39.473988  296002 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/auto-535911/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1009 20:28:39.496126  296002 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/auto-535911/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1009 20:28:39.537552  296002 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/auto-535911/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1009 20:28:39.619040  296002 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/auto-535911/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1009 20:28:39.780546  296002 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/auto-535911/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1009 20:28:40.102225  296002 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/auto-535911/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1009 20:28:40.743634  296002 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/auto-535911/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1009 20:28:42.025655  296002 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/auto-535911/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"

                                                
                                    

Test pass (254/327)

Order passed test Duration
3 TestDownloadOnly/v1.28.0/json-events 38.33
4 TestDownloadOnly/v1.28.0/preload-exists 0
8 TestDownloadOnly/v1.28.0/LogsDuration 0.08
9 TestDownloadOnly/v1.28.0/DeleteAll 0.23
10 TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds 0.16
12 TestDownloadOnly/v1.34.1/json-events 39.6
13 TestDownloadOnly/v1.34.1/preload-exists 0
17 TestDownloadOnly/v1.34.1/LogsDuration 0.07
18 TestDownloadOnly/v1.34.1/DeleteAll 0.25
19 TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds 0.14
21 TestBinaryMirror 0.59
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.07
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.07
27 TestAddons/Setup 177.3
31 TestAddons/serial/GCPAuth/Namespaces 0.2
32 TestAddons/serial/GCPAuth/FakeCredentials 9.82
48 TestAddons/StoppedEnableDisable 12.22
49 TestCertOptions 42.48
50 TestCertExpiration 229.88
59 TestErrorSpam/setup 33.79
60 TestErrorSpam/start 0.79
61 TestErrorSpam/status 1.14
62 TestErrorSpam/pause 5.78
63 TestErrorSpam/unpause 5.74
64 TestErrorSpam/stop 1.43
67 TestFunctional/serial/CopySyncFile 0
68 TestFunctional/serial/StartWithProxy 83.53
69 TestFunctional/serial/AuditLog 0.01
70 TestFunctional/serial/SoftStart 29.27
71 TestFunctional/serial/KubeContext 0.05
72 TestFunctional/serial/KubectlGetPods 0.09
75 TestFunctional/serial/CacheCmd/cache/add_remote 3.49
76 TestFunctional/serial/CacheCmd/cache/add_local 1.21
77 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.06
78 TestFunctional/serial/CacheCmd/cache/list 0.06
79 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.33
80 TestFunctional/serial/CacheCmd/cache/cache_reload 1.94
81 TestFunctional/serial/CacheCmd/cache/delete 0.12
82 TestFunctional/serial/MinikubeKubectlCmd 0.15
83 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.13
84 TestFunctional/serial/ExtraConfig 41.28
85 TestFunctional/serial/ComponentHealth 0.1
86 TestFunctional/serial/LogsCmd 1.48
87 TestFunctional/serial/LogsFileCmd 1.55
88 TestFunctional/serial/InvalidService 4.29
90 TestFunctional/parallel/ConfigCmd 0.45
91 TestFunctional/parallel/DashboardCmd 9.69
92 TestFunctional/parallel/DryRun 0.68
93 TestFunctional/parallel/InternationalLanguage 0.29
94 TestFunctional/parallel/StatusCmd 1.27
99 TestFunctional/parallel/AddonsCmd 0.16
100 TestFunctional/parallel/PersistentVolumeClaim 25.6
102 TestFunctional/parallel/SSHCmd 0.73
103 TestFunctional/parallel/CpCmd 2.22
105 TestFunctional/parallel/FileSync 0.33
106 TestFunctional/parallel/CertSync 2.38
110 TestFunctional/parallel/NodeLabels 0.13
112 TestFunctional/parallel/NonActiveRuntimeDisabled 0.63
114 TestFunctional/parallel/License 0.34
115 TestFunctional/parallel/Version/short 0.07
116 TestFunctional/parallel/Version/components 1.28
118 TestFunctional/parallel/ImageCommands/ImageListTable 0.3
119 TestFunctional/parallel/ImageCommands/ImageListJson 0.27
120 TestFunctional/parallel/ImageCommands/ImageListYaml 0.31
121 TestFunctional/parallel/ImageCommands/ImageBuild 4.33
122 TestFunctional/parallel/ImageCommands/Setup 0.69
123 TestFunctional/parallel/UpdateContextCmd/no_changes 0.2
124 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.24
125 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.22
130 TestFunctional/parallel/ImageCommands/ImageRemove 0.62
134 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.68
135 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
137 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 10.35
138 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.09
139 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
143 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
145 TestFunctional/parallel/ProfileCmd/profile_not_create 0.48
146 TestFunctional/parallel/ProfileCmd/profile_list 0.44
147 TestFunctional/parallel/ProfileCmd/profile_json_output 0.44
148 TestFunctional/parallel/MountCmd/any-port 7.25
149 TestFunctional/parallel/MountCmd/specific-port 1.95
150 TestFunctional/parallel/MountCmd/VerifyCleanup 1.66
151 TestFunctional/parallel/ServiceCmd/List 0.65
152 TestFunctional/parallel/ServiceCmd/JSONOutput 0.66
156 TestFunctional/delete_echo-server_images 0.04
157 TestFunctional/delete_my-image_image 0.02
158 TestFunctional/delete_minikube_cached_images 0.02
163 TestMultiControlPlane/serial/StartCluster 195.63
164 TestMultiControlPlane/serial/DeployApp 6.57
165 TestMultiControlPlane/serial/PingHostFromPods 1.61
166 TestMultiControlPlane/serial/AddWorkerNode 28.82
167 TestMultiControlPlane/serial/NodeLabels 0.1
168 TestMultiControlPlane/serial/HAppyAfterClusterStart 1.38
169 TestMultiControlPlane/serial/CopyFile 20.38
170 TestMultiControlPlane/serial/StopSecondaryNode 12.71
171 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.82
172 TestMultiControlPlane/serial/RestartSecondaryNode 28.21
173 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 1.18
177 TestMultiControlPlane/serial/StopCluster 23.92
178 TestMultiControlPlane/serial/RestartCluster 81.56
179 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.83
180 TestMultiControlPlane/serial/AddSecondaryNode 84.87
181 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 1.12
185 TestJSONOutput/start/Command 84.08
186 TestJSONOutput/start/Audit 0
188 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
189 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
192 TestJSONOutput/pause/Audit 0
194 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
195 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
198 TestJSONOutput/unpause/Audit 0
200 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
201 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
203 TestJSONOutput/stop/Command 5.66
204 TestJSONOutput/stop/Audit 0
206 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
207 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
208 TestErrorJSONOutput 0.26
210 TestKicCustomNetwork/create_custom_network 69.83
211 TestKicCustomNetwork/use_default_bridge_network 36.2
212 TestKicExistingNetwork 35.91
213 TestKicCustomSubnet 35.51
214 TestKicStaticIP 34.75
215 TestMainNoArgs 0.05
216 TestMinikubeProfile 75.66
219 TestMountStart/serial/StartWithMountFirst 9.89
220 TestMountStart/serial/VerifyMountFirst 0.29
221 TestMountStart/serial/StartWithMountSecond 9.29
222 TestMountStart/serial/VerifyMountSecond 0.28
223 TestMountStart/serial/DeleteFirst 1.64
224 TestMountStart/serial/VerifyMountPostDelete 0.27
225 TestMountStart/serial/Stop 1.22
226 TestMountStart/serial/RestartStopped 8.08
227 TestMountStart/serial/VerifyMountPostStop 0.26
230 TestMultiNode/serial/FreshStart2Nodes 138.26
231 TestMultiNode/serial/DeployApp2Nodes 5.24
232 TestMultiNode/serial/PingHostFrom2Pods 1.08
233 TestMultiNode/serial/AddNode 26.65
234 TestMultiNode/serial/MultiNodeLabels 0.09
235 TestMultiNode/serial/ProfileList 0.71
236 TestMultiNode/serial/CopyFile 10.53
237 TestMultiNode/serial/StopNode 2.34
238 TestMultiNode/serial/StartAfterStop 8.45
239 TestMultiNode/serial/RestartKeepsNodes 78.29
240 TestMultiNode/serial/DeleteNode 5.67
241 TestMultiNode/serial/StopMultiNode 23.76
242 TestMultiNode/serial/RestartMultiNode 48.31
243 TestMultiNode/serial/ValidateNameConflict 36.43
248 TestPreload 129.42
250 TestScheduledStopUnix 108.26
253 TestInsufficientStorage 13.08
254 TestRunningBinaryUpgrade 64.81
256 TestKubernetesUpgrade 424.82
257 TestMissingContainerUpgrade 135.61
259 TestNoKubernetes/serial/StartNoK8sWithVersion 0.1
260 TestNoKubernetes/serial/StartWithK8s 42.02
261 TestNoKubernetes/serial/StartWithStopK8s 16.28
262 TestNoKubernetes/serial/Start 10.05
263 TestNoKubernetes/serial/VerifyK8sNotRunning 0.37
264 TestNoKubernetes/serial/ProfileList 1.25
265 TestNoKubernetes/serial/Stop 1.27
266 TestNoKubernetes/serial/StartNoArgs 8.53
267 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.29
268 TestStoppedBinaryUpgrade/Setup 8.33
269 TestStoppedBinaryUpgrade/Upgrade 59.15
270 TestStoppedBinaryUpgrade/MinikubeLogs 1.42
279 TestPause/serial/Start 80.67
280 TestPause/serial/SecondStartNoReconfiguration 26.91
289 TestNetworkPlugins/group/false 3.79
294 TestStartStop/group/old-k8s-version/serial/FirstStart 61.61
296 TestStartStop/group/no-preload/serial/FirstStart 76.06
297 TestStartStop/group/old-k8s-version/serial/DeployApp 9.55
299 TestStartStop/group/old-k8s-version/serial/Stop 13.61
300 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.25
301 TestStartStop/group/old-k8s-version/serial/SecondStart 60.32
302 TestStartStop/group/no-preload/serial/DeployApp 10.35
304 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6
305 TestStartStop/group/no-preload/serial/Stop 11.9
306 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.1
307 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.23
309 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.21
310 TestStartStop/group/no-preload/serial/SecondStart 54.06
312 TestStartStop/group/embed-certs/serial/FirstStart 85.25
313 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6
314 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.11
315 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.25
318 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 88.1
319 TestStartStop/group/embed-certs/serial/DeployApp 9.47
321 TestStartStop/group/embed-certs/serial/Stop 12.36
322 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.34
323 TestStartStop/group/embed-certs/serial/SecondStart 49.86
324 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 9.36
325 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6
327 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 6.1
328 TestStartStop/group/default-k8s-diff-port/serial/Stop 11.94
329 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.24
331 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.25
332 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 57
334 TestStartStop/group/newest-cni/serial/FirstStart 40.62
335 TestStartStop/group/newest-cni/serial/DeployApp 0
337 TestStartStop/group/newest-cni/serial/Stop 1.25
338 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.21
339 TestStartStop/group/newest-cni/serial/SecondStart 16.16
340 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6
341 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.13
342 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
343 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
344 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.26
346 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.38
348 TestNetworkPlugins/group/auto/Start 85.03
349 TestNetworkPlugins/group/flannel/Start 51.42
350 TestNetworkPlugins/group/flannel/ControllerPod 6.01
351 TestNetworkPlugins/group/flannel/KubeletFlags 0.32
352 TestNetworkPlugins/group/flannel/NetCatPod 11.27
353 TestNetworkPlugins/group/flannel/DNS 0.17
354 TestNetworkPlugins/group/flannel/Localhost 0.24
355 TestNetworkPlugins/group/flannel/HairPin 0.15
356 TestNetworkPlugins/group/auto/KubeletFlags 0.4
357 TestNetworkPlugins/group/auto/NetCatPod 13.39
358 TestNetworkPlugins/group/auto/DNS 0.16
359 TestNetworkPlugins/group/auto/Localhost 0.14
360 TestNetworkPlugins/group/auto/HairPin 0.14
361 TestNetworkPlugins/group/calico/Start 68.29
362 TestNetworkPlugins/group/custom-flannel/Start 65.3
363 TestNetworkPlugins/group/calico/ControllerPod 6.01
364 TestNetworkPlugins/group/calico/KubeletFlags 0.46
365 TestNetworkPlugins/group/calico/NetCatPod 11.35
366 TestNetworkPlugins/group/calico/DNS 0.18
367 TestNetworkPlugins/group/calico/Localhost 0.14
368 TestNetworkPlugins/group/calico/HairPin 0.18
369 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.3
370 TestNetworkPlugins/group/custom-flannel/NetCatPod 11.29
371 TestNetworkPlugins/group/custom-flannel/DNS 0.25
372 TestNetworkPlugins/group/custom-flannel/Localhost 0.18
373 TestNetworkPlugins/group/custom-flannel/HairPin 0.2
374 TestNetworkPlugins/group/kindnet/Start 89.24
375 TestNetworkPlugins/group/bridge/Start 77.2
376 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
377 TestNetworkPlugins/group/bridge/KubeletFlags 0.31
378 TestNetworkPlugins/group/bridge/NetCatPod 12.29
379 TestNetworkPlugins/group/kindnet/KubeletFlags 0.49
380 TestNetworkPlugins/group/kindnet/NetCatPod 11.38
381 TestNetworkPlugins/group/bridge/DNS 0.16
382 TestNetworkPlugins/group/bridge/Localhost 0.13
383 TestNetworkPlugins/group/bridge/HairPin 0.13
384 TestNetworkPlugins/group/kindnet/DNS 0.17
385 TestNetworkPlugins/group/kindnet/Localhost 0.14
386 TestNetworkPlugins/group/kindnet/HairPin 0.13
387 TestNetworkPlugins/group/enable-default-cni/Start 47.45
388 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.3
389 TestNetworkPlugins/group/enable-default-cni/NetCatPod 10.3
390 TestNetworkPlugins/group/enable-default-cni/DNS 0.16
391 TestNetworkPlugins/group/enable-default-cni/Localhost 0.14
392 TestNetworkPlugins/group/enable-default-cni/HairPin 0.14
TestDownloadOnly/v1.28.0/json-events (38.33s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-606818 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-606818 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio: (38.326464437s)
--- PASS: TestDownloadOnly/v1.28.0/json-events (38.33s)

                                                
                                    
TestDownloadOnly/v1.28.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/preload-exists
I1009 19:00:34.760651  296002 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
I1009 19:00:34.760734  296002 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21683-294150/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4
--- PASS: TestDownloadOnly/v1.28.0/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.28.0/LogsDuration (0.08s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-606818
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-606818: exit status 85 (81.201894ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬──────────┐
	│ COMMAND │                                                                                   ARGS                                                                                    │       PROFILE        │  USER   │ VERSION │     START TIME      │ END TIME │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼──────────┤
	│ start   │ -o=json --download-only -p download-only-606818 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-606818 │ jenkins │ v1.37.0 │ 09 Oct 25 18:59 UTC │          │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴──────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/09 18:59:56
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1009 18:59:56.478367  296007 out.go:360] Setting OutFile to fd 1 ...
	I1009 18:59:56.478514  296007 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1009 18:59:56.478530  296007 out.go:374] Setting ErrFile to fd 2...
	I1009 18:59:56.478550  296007 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1009 18:59:56.478850  296007 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21683-294150/.minikube/bin
	W1009 18:59:56.479038  296007 root.go:314] Error reading config file at /home/jenkins/minikube-integration/21683-294150/.minikube/config/config.json: open /home/jenkins/minikube-integration/21683-294150/.minikube/config/config.json: no such file or directory
	I1009 18:59:56.479543  296007 out.go:368] Setting JSON to true
	I1009 18:59:56.480468  296007 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":6136,"bootTime":1760030261,"procs":153,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1009 18:59:56.480534  296007 start.go:143] virtualization:  
	I1009 18:59:56.484889  296007 out.go:99] [download-only-606818] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	W1009 18:59:56.485181  296007 preload.go:349] Failed to list preload files: open /home/jenkins/minikube-integration/21683-294150/.minikube/cache/preloaded-tarball: no such file or directory
	I1009 18:59:56.485229  296007 notify.go:221] Checking for updates...
	I1009 18:59:56.488096  296007 out.go:171] MINIKUBE_LOCATION=21683
	I1009 18:59:56.491125  296007 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1009 18:59:56.494071  296007 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/21683-294150/kubeconfig
	I1009 18:59:56.497068  296007 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/21683-294150/.minikube
	I1009 18:59:56.499945  296007 out.go:171] MINIKUBE_BIN=out/minikube-linux-arm64
	W1009 18:59:56.505488  296007 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1009 18:59:56.505791  296007 driver.go:422] Setting default libvirt URI to qemu:///system
	I1009 18:59:56.538115  296007 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1009 18:59:56.538247  296007 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1009 18:59:56.594911  296007 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:true NGoroutines:63 SystemTime:2025-10-09 18:59:56.585662208 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214827008 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1009 18:59:56.595035  296007 docker.go:319] overlay module found
	I1009 18:59:56.598009  296007 out.go:99] Using the docker driver based on user configuration
	I1009 18:59:56.598052  296007 start.go:309] selected driver: docker
	I1009 18:59:56.598064  296007 start.go:930] validating driver "docker" against <nil>
	I1009 18:59:56.598166  296007 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1009 18:59:56.650696  296007 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:true NGoroutines:63 SystemTime:2025-10-09 18:59:56.641697464 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214827008 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1009 18:59:56.650858  296007 start_flags.go:328] no existing cluster config was found, will generate one from the flags 
	I1009 18:59:56.651166  296007 start_flags.go:411] Using suggested 3072MB memory alloc based on sys=7834MB, container=7834MB
	I1009 18:59:56.651329  296007 start_flags.go:975] Wait components to verify : map[apiserver:true system_pods:true]
	I1009 18:59:56.654508  296007 out.go:171] Using Docker driver with root privileges
	I1009 18:59:56.657357  296007 cni.go:84] Creating CNI manager for ""
	I1009 18:59:56.657431  296007 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1009 18:59:56.657443  296007 start_flags.go:337] Found "CNI" CNI - setting NetworkPlugin=cni
	I1009 18:59:56.657527  296007 start.go:353] cluster config:
	{Name:download-only-606818 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:download-only-606818 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Co
ntainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1009 18:59:56.660485  296007 out.go:99] Starting "download-only-606818" primary control-plane node in "download-only-606818" cluster
	I1009 18:59:56.660511  296007 cache.go:123] Beginning downloading kic base image for docker with crio
	I1009 18:59:56.663309  296007 out.go:99] Pulling base image v0.0.48-1759745255-21703 ...
	I1009 18:59:56.663340  296007 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1009 18:59:56.663500  296007 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon
	I1009 18:59:56.679373  296007 cache.go:152] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 to local cache
	I1009 18:59:56.679576  296007 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local cache directory
	I1009 18:59:56.679684  296007 image.go:150] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 to local cache
	I1009 18:59:56.720003  296007 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4
	I1009 18:59:56.720037  296007 cache.go:58] Caching tarball of preloaded images
	I1009 18:59:56.720205  296007 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1009 18:59:56.723602  296007 out.go:99] Downloading Kubernetes v1.28.0 preload ...
	I1009 18:59:56.723636  296007 preload.go:313] getting checksum for preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4 from gcs api...
	I1009 18:59:56.816313  296007 preload.go:290] Got checksum from GCS API "e092595ade89dbfc477bd4cd6b9c633b"
	I1009 18:59:56.816492  296007 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4?checksum=md5:e092595ade89dbfc477bd4cd6b9c633b -> /home/jenkins/minikube-integration/21683-294150/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4
	I1009 19:00:03.305454  296007 cache.go:155] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 as a tarball
	
	
	* The control-plane node download-only-606818 host does not exist
	  To start a cluster, run: "minikube start -p download-only-606818"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.0/LogsDuration (0.08s)
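
The preload download exercised above can be reproduced by hand if needed; the tarball URL and MD5 checksum are taken verbatim from the log lines above, while the local filename is simply an illustrative choice. A minimal sketch:

    # fetch the v1.28.0 CRI-O preload tarball named in the log above
    curl -fLo preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4 \
      "https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4"
    # verify against the checksum minikube obtained from the GCS API
    echo "e092595ade89dbfc477bd4cd6b9c633b  preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4" | md5sum -c -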

                                                
                                    
TestDownloadOnly/v1.28.0/DeleteAll (0.23s)

=== RUN   TestDownloadOnly/v1.28.0/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.28.0/DeleteAll (0.23s)

                                                
                                    
TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.16s)

=== RUN   TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-606818
--- PASS: TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.16s)

                                                
                                    
TestDownloadOnly/v1.34.1/json-events (39.6s)

=== RUN   TestDownloadOnly/v1.34.1/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-214075 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-214075 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=docker  --container-runtime=crio: (39.596462546s)
--- PASS: TestDownloadOnly/v1.34.1/json-events (39.60s)

                                                
                                    
TestDownloadOnly/v1.34.1/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.34.1/preload-exists
I1009 19:01:14.832036  296002 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
I1009 19:01:14.832075  296002 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21683-294150/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
--- PASS: TestDownloadOnly/v1.34.1/preload-exists (0.00s)
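
The preload-exists check above amounts to the cached tarball being present on disk at the path logged. A sketch of the same check by hand, assuming the default MINIKUBE_HOME layout rather than the CI-specific path shown in the log:

    # the v1.34.1 CRI-O preload cached by the earlier download-only run
    ls -lh "$HOME/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4"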

                                                
                                    
TestDownloadOnly/v1.34.1/LogsDuration (0.07s)

=== RUN   TestDownloadOnly/v1.34.1/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-214075
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-214075: exit status 85 (68.064225ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                   ARGS                                                                                    │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-606818 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-606818 │ jenkins │ v1.37.0 │ 09 Oct 25 18:59 UTC │                     │
	│ delete  │ --all                                                                                                                                                                     │ minikube             │ jenkins │ v1.37.0 │ 09 Oct 25 19:00 UTC │ 09 Oct 25 19:00 UTC │
	│ delete  │ -p download-only-606818                                                                                                                                                   │ download-only-606818 │ jenkins │ v1.37.0 │ 09 Oct 25 19:00 UTC │ 09 Oct 25 19:00 UTC │
	│ start   │ -o=json --download-only -p download-only-214075 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-214075 │ jenkins │ v1.37.0 │ 09 Oct 25 19:00 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/09 19:00:35
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1009 19:00:35.282939  296210 out.go:360] Setting OutFile to fd 1 ...
	I1009 19:00:35.283125  296210 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1009 19:00:35.283160  296210 out.go:374] Setting ErrFile to fd 2...
	I1009 19:00:35.283182  296210 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1009 19:00:35.283478  296210 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21683-294150/.minikube/bin
	I1009 19:00:35.284014  296210 out.go:368] Setting JSON to true
	I1009 19:00:35.284996  296210 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":6175,"bootTime":1760030261,"procs":148,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1009 19:00:35.285100  296210 start.go:143] virtualization:  
	I1009 19:00:35.288762  296210 out.go:99] [download-only-214075] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1009 19:00:35.289011  296210 notify.go:221] Checking for updates...
	I1009 19:00:35.291938  296210 out.go:171] MINIKUBE_LOCATION=21683
	I1009 19:00:35.294940  296210 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1009 19:00:35.297863  296210 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/21683-294150/kubeconfig
	I1009 19:00:35.300823  296210 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/21683-294150/.minikube
	I1009 19:00:35.303710  296210 out.go:171] MINIKUBE_BIN=out/minikube-linux-arm64
	W1009 19:00:35.309271  296210 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1009 19:00:35.309569  296210 driver.go:422] Setting default libvirt URI to qemu:///system
	I1009 19:00:35.332046  296210 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1009 19:00:35.332153  296210 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1009 19:00:35.387289  296210 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:27 OomKillDisable:true NGoroutines:43 SystemTime:2025-10-09 19:00:35.3779731 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aar
ch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214827008 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:
/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1009 19:00:35.387394  296210 docker.go:319] overlay module found
	I1009 19:00:35.390566  296210 out.go:99] Using the docker driver based on user configuration
	I1009 19:00:35.390607  296210 start.go:309] selected driver: docker
	I1009 19:00:35.390615  296210 start.go:930] validating driver "docker" against <nil>
	I1009 19:00:35.390716  296210 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1009 19:00:35.443965  296210 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:27 OomKillDisable:true NGoroutines:43 SystemTime:2025-10-09 19:00:35.435009774 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214827008 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1009 19:00:35.444153  296210 start_flags.go:328] no existing cluster config was found, will generate one from the flags 
	I1009 19:00:35.444436  296210 start_flags.go:411] Using suggested 3072MB memory alloc based on sys=7834MB, container=7834MB
	I1009 19:00:35.444594  296210 start_flags.go:975] Wait components to verify : map[apiserver:true system_pods:true]
	I1009 19:00:35.447758  296210 out.go:171] Using Docker driver with root privileges
	I1009 19:00:35.450511  296210 cni.go:84] Creating CNI manager for ""
	I1009 19:00:35.450618  296210 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1009 19:00:35.450634  296210 start_flags.go:337] Found "CNI" CNI - setting NetworkPlugin=cni
	I1009 19:00:35.450730  296210 start.go:353] cluster config:
	{Name:download-only-214075 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:download-only-214075 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Co
ntainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1009 19:00:35.453699  296210 out.go:99] Starting "download-only-214075" primary control-plane node in "download-only-214075" cluster
	I1009 19:00:35.453725  296210 cache.go:123] Beginning downloading kic base image for docker with crio
	I1009 19:00:35.456517  296210 out.go:99] Pulling base image v0.0.48-1759745255-21703 ...
	I1009 19:00:35.456546  296210 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1009 19:00:35.456721  296210 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon
	I1009 19:00:35.472723  296210 cache.go:152] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 to local cache
	I1009 19:00:35.472876  296210 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local cache directory
	I1009 19:00:35.472896  296210 image.go:68] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local cache directory, skipping pull
	I1009 19:00:35.472901  296210 image.go:137] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 exists in cache, skipping pull
	I1009 19:00:35.472909  296210 cache.go:155] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 as a tarball
	I1009 19:00:35.513794  296210 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.34.1/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1009 19:00:35.513823  296210 cache.go:58] Caching tarball of preloaded images
	I1009 19:00:35.513997  296210 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1009 19:00:35.517134  296210 out.go:99] Downloading Kubernetes v1.34.1 preload ...
	I1009 19:00:35.517169  296210 preload.go:313] getting checksum for preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 from gcs api...
	I1009 19:00:35.598063  296210 preload.go:290] Got checksum from GCS API "bc3e4aa50814345ef9ba3452bb5efb9f"
	I1009 19:00:35.598119  296210 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.34.1/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4?checksum=md5:bc3e4aa50814345ef9ba3452bb5efb9f -> /home/jenkins/minikube-integration/21683-294150/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	
	
	* The control-plane node download-only-214075 host does not exist
	  To start a cluster, run: "minikube start -p download-only-214075"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.34.1/LogsDuration (0.07s)

                                                
                                    
TestDownloadOnly/v1.34.1/DeleteAll (0.25s)

=== RUN   TestDownloadOnly/v1.34.1/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.34.1/DeleteAll (0.25s)

                                                
                                    
TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds (0.14s)

=== RUN   TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-214075
--- PASS: TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds (0.14s)

                                                
                                    
TestBinaryMirror (0.59s)

=== RUN   TestBinaryMirror
I1009 19:01:15.973204  296002 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubectl.sha256
aaa_download_only_test.go:309: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p binary-mirror-719553 --alsologtostderr --binary-mirror http://127.0.0.1:39775 --driver=docker  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-719553" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p binary-mirror-719553
--- PASS: TestBinaryMirror (0.59s)

                                                
                                    
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1000: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-999657
addons_test.go:1000: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable dashboard -p addons-999657: exit status 85 (65.563323ms)

                                                
                                                
-- stdout --
	* Profile "addons-999657" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-999657"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)

                                                
                                    
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.07s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1011: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-999657
addons_test.go:1011: (dbg) Non-zero exit: out/minikube-linux-arm64 addons disable dashboard -p addons-999657: exit status 85 (69.872808ms)

                                                
                                                
-- stdout --
	* Profile "addons-999657" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-999657"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.07s)

                                                
                                    
TestAddons/Setup (177.3s)

=== RUN   TestAddons/Setup
addons_test.go:108: (dbg) Run:  out/minikube-linux-arm64 start -p addons-999657 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:108: (dbg) Done: out/minikube-linux-arm64 start -p addons-999657 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (2m57.297570133s)
--- PASS: TestAddons/Setup (177.30s)
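
The same addons do not have to be enabled in one large start invocation; on an existing profile they can be toggled individually. A minimal sketch, where the two addons picked are arbitrary examples from the list above:

    # enable a couple of the addons from the list above on the running profile
    out/minikube-linux-arm64 addons enable metrics-server -p addons-999657
    out/minikube-linux-arm64 addons enable ingress -p addons-999657
    # confirm addon status and that the addon workloads are scheduled
    out/minikube-linux-arm64 addons list -p addons-999657
    kubectl --context addons-999657 get pods -A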

                                                
                                    
TestAddons/serial/GCPAuth/Namespaces (0.2s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:630: (dbg) Run:  kubectl --context addons-999657 create ns new-namespace
addons_test.go:644: (dbg) Run:  kubectl --context addons-999657 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.20s)

                                                
                                    
TestAddons/serial/GCPAuth/FakeCredentials (9.82s)

=== RUN   TestAddons/serial/GCPAuth/FakeCredentials
addons_test.go:675: (dbg) Run:  kubectl --context addons-999657 create -f testdata/busybox.yaml
addons_test.go:682: (dbg) Run:  kubectl --context addons-999657 create sa gcp-auth-test
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [ca3e136e-233d-4e14-a69e-e23a77e22510] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [ca3e136e-233d-4e14-a69e-e23a77e22510] Running
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: integration-test=busybox healthy within 9.003527302s
addons_test.go:694: (dbg) Run:  kubectl --context addons-999657 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:706: (dbg) Run:  kubectl --context addons-999657 describe sa gcp-auth-test
addons_test.go:720: (dbg) Run:  kubectl --context addons-999657 exec busybox -- /bin/sh -c "cat /google-app-creds.json"
addons_test.go:744: (dbg) Run:  kubectl --context addons-999657 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
--- PASS: TestAddons/serial/GCPAuth/FakeCredentials (9.82s)
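
The credential checks above can be repeated by hand against the same busybox pod; every command below is lifted from the test output:

    # the gcp-auth webhook should have injected fake credentials into the pod
    kubectl --context addons-999657 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
    kubectl --context addons-999657 exec busybox -- /bin/sh -c "cat /google-app-creds.json"
    kubectl --context addons-999657 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"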

                                                
                                    
TestAddons/StoppedEnableDisable (12.22s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:172: (dbg) Run:  out/minikube-linux-arm64 stop -p addons-999657
addons_test.go:172: (dbg) Done: out/minikube-linux-arm64 stop -p addons-999657: (11.922950573s)
addons_test.go:176: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-999657
addons_test.go:180: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-999657
addons_test.go:185: (dbg) Run:  out/minikube-linux-arm64 addons disable gvisor -p addons-999657
--- PASS: TestAddons/StoppedEnableDisable (12.22s)

                                                
                                    
TestCertOptions (42.48s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

                                                
                                                

                                                
                                                
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-arm64 start -p cert-options-038875 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio
cert_options_test.go:49: (dbg) Done: out/minikube-linux-arm64 start -p cert-options-038875 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio: (39.743060376s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-arm64 -p cert-options-038875 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-038875 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-arm64 ssh -p cert-options-038875 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-038875" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-options-038875
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-options-038875: (1.995256604s)
--- PASS: TestCertOptions (42.48s)
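
The assertions behind this test can be spot-checked manually: the apiserver certificate should carry the extra SANs passed at start time, and the kubeconfig should point at the non-default port. A sketch using the same commands the test runs, plus greps for the values from the start flags above:

    # the extra --apiserver-ips/--apiserver-names should appear as SANs in the certificate
    out/minikube-linux-arm64 -p cert-options-038875 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt" | grep -E "192.168.15.15|www.google.com"
    # the kubeconfig entry should use the custom apiserver port
    kubectl --context cert-options-038875 config view | grep 8555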

                                                
                                    
TestCertExpiration (229.88s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

                                                
                                                

                                                
                                                
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-282540 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio
cert_options_test.go:123: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-282540 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio: (29.891166039s)
E1009 20:14:14.731380  296002 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/addons-999657/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-282540 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio
E1009 20:16:05.048685  296002 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/functional-326957/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
cert_options_test.go:131: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-282540 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio: (17.548613987s)
helpers_test.go:175: Cleaning up "cert-expiration-282540" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-expiration-282540
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-expiration-282540: (2.441328268s)
--- PASS: TestCertExpiration (229.88s)
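
To see the effect of --cert-expiration directly, the certificate's validity window can be read from inside the node before the profile is cleaned up; the certificate path is the same one TestCertOptions inspects above, and openssl's -enddate output is the relevant field:

    # inspect the current apiserver certificate lifetime on the node
    out/minikube-linux-arm64 -p cert-expiration-282540 ssh "openssl x509 -noout -enddate -in /var/lib/minikube/certs/apiserver.crt"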

                                                
                                    
TestErrorSpam/setup (33.79s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -p nospam-963414 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-963414 --driver=docker  --container-runtime=crio
error_spam_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -p nospam-963414 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-963414 --driver=docker  --container-runtime=crio: (33.791737082s)
--- PASS: TestErrorSpam/setup (33.79s)

                                                
                                    
TestErrorSpam/start (0.79s)

=== RUN   TestErrorSpam/start
error_spam_test.go:206: Cleaning up 1 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-963414 --log_dir /tmp/nospam-963414 start --dry-run
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-963414 --log_dir /tmp/nospam-963414 start --dry-run
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-963414 --log_dir /tmp/nospam-963414 start --dry-run
--- PASS: TestErrorSpam/start (0.79s)

                                                
                                    
TestErrorSpam/status (1.14s)

=== RUN   TestErrorSpam/status
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-963414 --log_dir /tmp/nospam-963414 status
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-963414 --log_dir /tmp/nospam-963414 status
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-963414 --log_dir /tmp/nospam-963414 status
--- PASS: TestErrorSpam/status (1.14s)

                                                
                                    
TestErrorSpam/pause (5.78s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-963414 --log_dir /tmp/nospam-963414 pause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-arm64 -p nospam-963414 --log_dir /tmp/nospam-963414 pause: exit status 80 (1.916852883s)

                                                
                                                
-- stdout --
	* Pausing node nospam-963414 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-09T19:08:10Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_1.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:151: "out/minikube-linux-arm64 -p nospam-963414 --log_dir /tmp/nospam-963414 pause" failed: exit status 80
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-963414 --log_dir /tmp/nospam-963414 pause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-arm64 -p nospam-963414 --log_dir /tmp/nospam-963414 pause: exit status 80 (1.529465009s)

                                                
                                                
-- stdout --
	* Pausing node nospam-963414 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-09T19:08:11Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_1.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:151: "out/minikube-linux-arm64 -p nospam-963414 --log_dir /tmp/nospam-963414 pause" failed: exit status 80
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-963414 --log_dir /tmp/nospam-963414 pause
error_spam_test.go:172: (dbg) Non-zero exit: out/minikube-linux-arm64 -p nospam-963414 --log_dir /tmp/nospam-963414 pause: exit status 80 (2.326658697s)

                                                
                                                
-- stdout --
	* Pausing node nospam-963414 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-09T19:08:13Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_1.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:174: "out/minikube-linux-arm64 -p nospam-963414 --log_dir /tmp/nospam-963414 pause" failed: exit status 80
--- PASS: TestErrorSpam/pause (5.78s)
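
All three pause attempts above fail identically: minikube shells into the node and runs "sudo runc list -f json", which exits 1 because /run/runc does not exist. A hedged diagnostic sketch; the first command simply reproduces the failing call, and the second lists candidate runtime state directories (which directory CRI-O actually uses on this image is an assumption to verify, not something the log establishes):

    # reproduce the failing call from the stderr blocks above
    out/minikube-linux-arm64 -p nospam-963414 ssh "sudo runc list -f json"
    # check which runtime state directories actually exist on the crio node
    out/minikube-linux-arm64 -p nospam-963414 ssh "sudo ls -d /run/runc /run/crio /run/containers 2>&1"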

                                                
                                    
TestErrorSpam/unpause (5.74s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-963414 --log_dir /tmp/nospam-963414 unpause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-arm64 -p nospam-963414 --log_dir /tmp/nospam-963414 unpause: exit status 80 (1.391044096s)

                                                
                                                
-- stdout --
	* Unpausing node nospam-963414 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_UNPAUSE: Pause: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-09T19:08:15Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_1.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:151: "out/minikube-linux-arm64 -p nospam-963414 --log_dir /tmp/nospam-963414 unpause" failed: exit status 80
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-963414 --log_dir /tmp/nospam-963414 unpause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-arm64 -p nospam-963414 --log_dir /tmp/nospam-963414 unpause: exit status 80 (2.001898027s)

                                                
                                                
-- stdout --
	* Unpausing node nospam-963414 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_UNPAUSE: Pause: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-09T19:08:17Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_1.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:151: "out/minikube-linux-arm64 -p nospam-963414 --log_dir /tmp/nospam-963414 unpause" failed: exit status 80
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-963414 --log_dir /tmp/nospam-963414 unpause
error_spam_test.go:172: (dbg) Non-zero exit: out/minikube-linux-arm64 -p nospam-963414 --log_dir /tmp/nospam-963414 unpause: exit status 80 (2.342840523s)

                                                
                                                
-- stdout --
	* Unpausing node nospam-963414 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_UNPAUSE: Pause: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-09T19:08:19Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_1.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:174: "out/minikube-linux-arm64 -p nospam-963414 --log_dir /tmp/nospam-963414 unpause" failed: exit status 80
--- PASS: TestErrorSpam/unpause (5.74s)

                                                
                                    
TestErrorSpam/stop (1.43s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-963414 --log_dir /tmp/nospam-963414 stop
error_spam_test.go:149: (dbg) Done: out/minikube-linux-arm64 -p nospam-963414 --log_dir /tmp/nospam-963414 stop: (1.218665112s)
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-963414 --log_dir /tmp/nospam-963414 stop
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-963414 --log_dir /tmp/nospam-963414 stop
--- PASS: TestErrorSpam/stop (1.43s)

                                                
                                    
TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1860: local sync path: /home/jenkins/minikube-integration/21683-294150/.minikube/files/etc/test/nested/copy/296002/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

                                                
                                    
TestFunctional/serial/StartWithProxy (83.53s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2239: (dbg) Run:  out/minikube-linux-arm64 start -p functional-326957 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio
E1009 19:09:14.734586  296002 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/addons-999657/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1009 19:09:14.740946  296002 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/addons-999657/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1009 19:09:14.752661  296002 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/addons-999657/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1009 19:09:14.774107  296002 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/addons-999657/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1009 19:09:14.815599  296002 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/addons-999657/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1009 19:09:14.897133  296002 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/addons-999657/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1009 19:09:15.058788  296002 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/addons-999657/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1009 19:09:15.380670  296002 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/addons-999657/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1009 19:09:16.022485  296002 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/addons-999657/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1009 19:09:17.304995  296002 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/addons-999657/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1009 19:09:19.866356  296002 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/addons-999657/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1009 19:09:24.987778  296002 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/addons-999657/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1009 19:09:35.229945  296002 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/addons-999657/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:2239: (dbg) Done: out/minikube-linux-arm64 start -p functional-326957 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio: (1m23.532027071s)
--- PASS: TestFunctional/serial/StartWithProxy (83.53s)

                                                
                                    
x
+
TestFunctional/serial/AuditLog (0.01s)

                                                
                                                
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.01s)

                                                
                                    
x
+
TestFunctional/serial/SoftStart (29.27s)

                                                
                                                
=== RUN   TestFunctional/serial/SoftStart
I1009 19:09:49.587329  296002 config.go:182] Loaded profile config "functional-326957": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
functional_test.go:674: (dbg) Run:  out/minikube-linux-arm64 start -p functional-326957 --alsologtostderr -v=8
E1009 19:09:55.712050  296002 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/addons-999657/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:674: (dbg) Done: out/minikube-linux-arm64 start -p functional-326957 --alsologtostderr -v=8: (29.26649011s)
functional_test.go:678: soft start took 29.27063073s for "functional-326957" cluster.
I1009 19:10:18.854158  296002 config.go:182] Loaded profile config "functional-326957": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestFunctional/serial/SoftStart (29.27s)
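Note: a "soft start" here is simply a second `start` run against a profile that already exists, so minikube reuses the running cluster instead of recreating it; that is why it completes in about 29s versus about 83s for the initial start above. A rough reproduction with this run's profile name (any existing profile works):

    out/minikube-linux-arm64 start -p functional-326957 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker --container-runtime=crio
    out/minikube-linux-arm64 start -p functional-326957 --alsologtostderr -v=8    # second start against the existing profile is the "soft" start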

                                                
                                    
x
+
TestFunctional/serial/KubeContext (0.05s)

                                                
                                                
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:696: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.05s)

                                                
                                    
x
+
TestFunctional/serial/KubectlGetPods (0.09s)

                                                
                                                
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:711: (dbg) Run:  kubectl --context functional-326957 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.09s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/add_remote (3.49s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 -p functional-326957 cache add registry.k8s.io/pause:3.1
functional_test.go:1064: (dbg) Done: out/minikube-linux-arm64 -p functional-326957 cache add registry.k8s.io/pause:3.1: (1.172863354s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 -p functional-326957 cache add registry.k8s.io/pause:3.3
functional_test.go:1064: (dbg) Done: out/minikube-linux-arm64 -p functional-326957 cache add registry.k8s.io/pause:3.3: (1.171367453s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 -p functional-326957 cache add registry.k8s.io/pause:latest
functional_test.go:1064: (dbg) Done: out/minikube-linux-arm64 -p functional-326957 cache add registry.k8s.io/pause:latest: (1.14315715s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.49s)
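Note: the cache subcommands above pull an image on the host and load it into the node's container runtime so it is available in-cluster without a registry pull. A sketch of the same flow, shown with a plain `minikube` binary in place of the CI build at out/minikube-linux-arm64:

    minikube -p functional-326957 cache add registry.k8s.io/pause:3.1
    minikube -p functional-326957 cache add registry.k8s.io/pause:3.3
    minikube cache list                                       # host-side view of the cache
    minikube -p functional-326957 ssh sudo crictl images      # the cached tags should be visible inside the node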

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/add_local (1.21s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1092: (dbg) Run:  docker build -t minikube-local-cache-test:functional-326957 /tmp/TestFunctionalserialCacheCmdcacheadd_local608386216/001
functional_test.go:1104: (dbg) Run:  out/minikube-linux-arm64 -p functional-326957 cache add minikube-local-cache-test:functional-326957
functional_test.go:1109: (dbg) Run:  out/minikube-linux-arm64 -p functional-326957 cache delete minikube-local-cache-test:functional-326957
functional_test.go:1098: (dbg) Run:  docker rmi minikube-local-cache-test:functional-326957
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.21s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1117: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/list (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1125: (dbg) Run:  out/minikube-linux-arm64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.06s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.33s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1139: (dbg) Run:  out/minikube-linux-arm64 -p functional-326957 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.33s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/cache_reload (1.94s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1162: (dbg) Run:  out/minikube-linux-arm64 -p functional-326957 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Run:  out/minikube-linux-arm64 -p functional-326957 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-326957 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (304.886409ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1173: (dbg) Run:  out/minikube-linux-arm64 -p functional-326957 cache reload
functional_test.go:1178: (dbg) Run:  out/minikube-linux-arm64 -p functional-326957 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.94s)
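Note: cache_reload checks the recovery path: the image is deleted inside the node, `crictl inspecti` confirms it is gone, and `cache reload` pushes it back from the host-side cache. The same sequence by hand, assuming the image was previously added with `cache add`:

    minikube -p functional-326957 ssh sudo crictl rmi registry.k8s.io/pause:latest
    minikube -p functional-326957 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # expected to fail while the image is absent
    minikube -p functional-326957 cache reload
    minikube -p functional-326957 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # succeeds again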

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/delete (0.12s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1187: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1187: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.12s)

                                                
                                    
x
+
TestFunctional/serial/MinikubeKubectlCmd (0.15s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:731: (dbg) Run:  out/minikube-linux-arm64 -p functional-326957 kubectl -- --context functional-326957 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.15s)
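Note: `minikube kubectl` shells out to a kubectl binary matching the cluster's Kubernetes version (downloading it if needed); everything after `--` is passed through verbatim, so the call above behaves like a plain `kubectl get pods` against the functional-326957 context:

    minikube -p functional-326957 kubectl -- --context functional-326957 get pods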

                                                
                                    
x
+
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.13s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:756: (dbg) Run:  out/kubectl --context functional-326957 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.13s)

                                                
                                    
x
+
TestFunctional/serial/ExtraConfig (41.28s)

                                                
                                                
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:772: (dbg) Run:  out/minikube-linux-arm64 start -p functional-326957 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E1009 19:10:36.675359  296002 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/addons-999657/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:772: (dbg) Done: out/minikube-linux-arm64 start -p functional-326957 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (41.276021258s)
functional_test.go:776: restart took 41.276128899s for "functional-326957" cluster.
I1009 19:11:07.763064  296002 config.go:182] Loaded profile config "functional-326957": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestFunctional/serial/ExtraConfig (41.28s)
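Note: `--extra-config` threads per-component flags (here an extra apiserver admission plugin) into the cluster configuration, so re-running `start` with it amounts to a restart with the new setting, which is what the 41s "restart took ..." line reflects. The generic form of the invocation used above:

    minikube start -p functional-326957 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all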

                                                
                                    
x
+
TestFunctional/serial/ComponentHealth (0.1s)

                                                
                                                
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:825: (dbg) Run:  kubectl --context functional-326957 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:840: etcd phase: Running
functional_test.go:850: etcd status: Ready
functional_test.go:840: kube-apiserver phase: Running
functional_test.go:850: kube-apiserver status: Ready
functional_test.go:840: kube-controller-manager phase: Running
functional_test.go:850: kube-controller-manager status: Ready
functional_test.go:840: kube-scheduler phase: Running
functional_test.go:850: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.10s)

                                                
                                    
x
+
TestFunctional/serial/LogsCmd (1.48s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1251: (dbg) Run:  out/minikube-linux-arm64 -p functional-326957 logs
functional_test.go:1251: (dbg) Done: out/minikube-linux-arm64 -p functional-326957 logs: (1.477692835s)
--- PASS: TestFunctional/serial/LogsCmd (1.48s)

                                                
                                    
x
+
TestFunctional/serial/LogsFileCmd (1.55s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1265: (dbg) Run:  out/minikube-linux-arm64 -p functional-326957 logs --file /tmp/TestFunctionalserialLogsFileCmd895346651/001/logs.txt
functional_test.go:1265: (dbg) Done: out/minikube-linux-arm64 -p functional-326957 logs --file /tmp/TestFunctionalserialLogsFileCmd895346651/001/logs.txt: (1.550188531s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.55s)
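Note: `logs --file` writes the same output as `minikube logs` to a file instead of stdout; it is the form minikube's own error boxes (see the InvalidService output below) ask users to attach to bug reports. For example:

    minikube -p functional-326957 logs --file /tmp/logs.txt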

                                                
                                    
x
+
TestFunctional/serial/InvalidService (4.29s)

                                                
                                                
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2326: (dbg) Run:  kubectl --context functional-326957 apply -f testdata/invalidsvc.yaml
functional_test.go:2340: (dbg) Run:  out/minikube-linux-arm64 service invalid-svc -p functional-326957
functional_test.go:2340: (dbg) Non-zero exit: out/minikube-linux-arm64 service invalid-svc -p functional-326957: exit status 115 (386.101798ms)

                                                
                                                
-- stdout --
	┌───────────┬─────────────┬─────────────┬───────────────────────────┐
	│ NAMESPACE │    NAME     │ TARGET PORT │            URL            │
	├───────────┼─────────────┼─────────────┼───────────────────────────┤
	│ default   │ invalid-svc │ 80          │ http://192.168.49.2:30298 │
	└───────────┴─────────────┴─────────────┴───────────────────────────┘
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2332: (dbg) Run:  kubectl --context functional-326957 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.29s)
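Note: exit status 115 is the expected outcome here: `minikube service` looks up the pods behind the Service and refuses to print a URL when none is running, reporting SVC_UNREACHABLE. A sketch of the failing path using the same testdata manifest (which selects no running pod):

    kubectl --context functional-326957 apply -f testdata/invalidsvc.yaml
    minikube service invalid-svc -p functional-326957     # exits 115 with SVC_UNREACHABLE
    kubectl --context functional-326957 delete -f testdata/invalidsvc.yaml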

                                                
                                    
x
+
TestFunctional/parallel/ConfigCmd (0.45s)

                                                
                                                
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-326957 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-326957 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-326957 config get cpus: exit status 14 (63.89569ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-326957 config set cpus 2
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-326957 config get cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-326957 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-326957 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-326957 config get cpus: exit status 14 (83.480675ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.45s)
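Note: `minikube config` persists per-user defaults; `config get` on a key that is not set exits 14 with "specified key could not be found in config", which is exactly what the two non-zero exits above assert. The round trip:

    minikube -p functional-326957 config set cpus 2
    minikube -p functional-326957 config get cpus      # prints 2
    minikube -p functional-326957 config unset cpus
    minikube -p functional-326957 config get cpus      # exit status 14: key not found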

                                                
                                    
x
+
TestFunctional/parallel/DashboardCmd (9.69s)

                                                
                                                
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:920: (dbg) daemon: [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-326957 --alsologtostderr -v=1]
functional_test.go:925: (dbg) stopping [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-326957 --alsologtostderr -v=1] ...
helpers_test.go:525: unable to kill pid 324316: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (9.69s)

                                                
                                    
x
+
TestFunctional/parallel/DryRun (0.68s)

                                                
                                                
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:989: (dbg) Run:  out/minikube-linux-arm64 start -p functional-326957 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:989: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-326957 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (287.136829ms)

                                                
                                                
-- stdout --
	* [functional-326957] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21683
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21683-294150/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21683-294150/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1009 19:21:50.658476  323742 out.go:360] Setting OutFile to fd 1 ...
	I1009 19:21:50.658903  323742 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1009 19:21:50.658914  323742 out.go:374] Setting ErrFile to fd 2...
	I1009 19:21:50.658919  323742 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1009 19:21:50.659209  323742 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21683-294150/.minikube/bin
	I1009 19:21:50.659610  323742 out.go:368] Setting JSON to false
	I1009 19:21:50.660570  323742 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":7450,"bootTime":1760030261,"procs":181,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1009 19:21:50.660681  323742 start.go:143] virtualization:  
	I1009 19:21:50.664149  323742 out.go:179] * [functional-326957] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1009 19:21:50.667451  323742 notify.go:221] Checking for updates...
	I1009 19:21:50.668084  323742 out.go:179]   - MINIKUBE_LOCATION=21683
	I1009 19:21:50.671019  323742 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1009 19:21:50.673817  323742 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21683-294150/kubeconfig
	I1009 19:21:50.676799  323742 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21683-294150/.minikube
	I1009 19:21:50.679723  323742 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1009 19:21:50.683077  323742 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1009 19:21:50.686763  323742 config.go:182] Loaded profile config "functional-326957": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1009 19:21:50.689008  323742 driver.go:422] Setting default libvirt URI to qemu:///system
	I1009 19:21:50.739164  323742 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1009 19:21:50.739355  323742 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1009 19:21:50.840809  323742 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-09 19:21:50.82278639 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aa
rch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214827008 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path
:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1009 19:21:50.840936  323742 docker.go:319] overlay module found
	I1009 19:21:50.845292  323742 out.go:179] * Using the docker driver based on existing profile
	I1009 19:21:50.848277  323742 start.go:309] selected driver: docker
	I1009 19:21:50.848297  323742 start.go:930] validating driver "docker" against &{Name:functional-326957 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-326957 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] Moun
tPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1009 19:21:50.848397  323742 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1009 19:21:50.852557  323742 out.go:203] 
	W1009 19:21:50.859471  323742 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1009 19:21:50.864577  323742 out.go:203] 

                                                
                                                
** /stderr **
functional_test.go:1006: (dbg) Run:  out/minikube-linux-arm64 start -p functional-326957 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.68s)
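Note: `--dry-run` walks the normal validation path without creating or changing anything, so the undersized memory request is rejected up front (exit 23, RSRC_INSUFFICIENT_REQ_MEMORY: 250MiB is below the 1800MB minimum) while the second dry run without that flag succeeds. Roughly:

    minikube start -p functional-326957 --dry-run --memory 250MB --driver=docker --container-runtime=crio    # exit 23
    minikube start -p functional-326957 --dry-run --alsologtostderr -v=1 --driver=docker --container-runtime=crio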

                                                
                                    
x
+
TestFunctional/parallel/InternationalLanguage (0.29s)

                                                
                                                
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1035: (dbg) Run:  out/minikube-linux-arm64 start -p functional-326957 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:1035: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-326957 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (284.927592ms)

                                                
                                                
-- stdout --
	* [functional-326957] minikube v1.37.0 sur Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21683
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21683-294150/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21683-294150/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1009 19:21:50.384945  323648 out.go:360] Setting OutFile to fd 1 ...
	I1009 19:21:50.385069  323648 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1009 19:21:50.385078  323648 out.go:374] Setting ErrFile to fd 2...
	I1009 19:21:50.385083  323648 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1009 19:21:50.386116  323648 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21683-294150/.minikube/bin
	I1009 19:21:50.386580  323648 out.go:368] Setting JSON to false
	I1009 19:21:50.387468  323648 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":7450,"bootTime":1760030261,"procs":183,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1009 19:21:50.387549  323648 start.go:143] virtualization:  
	I1009 19:21:50.390966  323648 out.go:179] * [functional-326957] minikube v1.37.0 sur Ubuntu 20.04 (arm64)
	I1009 19:21:50.395119  323648 out.go:179]   - MINIKUBE_LOCATION=21683
	I1009 19:21:50.395479  323648 notify.go:221] Checking for updates...
	I1009 19:21:50.408518  323648 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1009 19:21:50.410572  323648 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21683-294150/kubeconfig
	I1009 19:21:50.413500  323648 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21683-294150/.minikube
	I1009 19:21:50.416550  323648 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1009 19:21:50.420297  323648 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1009 19:21:50.425943  323648 config.go:182] Loaded profile config "functional-326957": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1009 19:21:50.426868  323648 driver.go:422] Setting default libvirt URI to qemu:///system
	I1009 19:21:50.473857  323648 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1009 19:21:50.473981  323648 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1009 19:21:50.563632  323648 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-09 19:21:50.552558493 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214827008 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1009 19:21:50.563732  323648 docker.go:319] overlay module found
	I1009 19:21:50.566843  323648 out.go:179] * Utilisation du pilote docker basé sur le profil existant
	I1009 19:21:50.569900  323648 start.go:309] selected driver: docker
	I1009 19:21:50.569923  323648 start.go:930] validating driver "docker" against &{Name:functional-326957 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-326957 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] Moun
tPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1009 19:21:50.570026  323648 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1009 19:21:50.573469  323648 out.go:203] 
	W1009 19:21:50.576447  323648 out.go:285] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1009 19:21:50.579466  323648 out.go:203] 

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.29s)

                                                
                                    
x
+
TestFunctional/parallel/StatusCmd (1.27s)

                                                
                                                
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:869: (dbg) Run:  out/minikube-linux-arm64 -p functional-326957 status
functional_test.go:875: (dbg) Run:  out/minikube-linux-arm64 -p functional-326957 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:887: (dbg) Run:  out/minikube-linux-arm64 -p functional-326957 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.27s)
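Note: `status` supports a Go-template output via -f and JSON via -o, which makes it convenient for scripted health checks; the template fields come from minikube's status struct, while the text around them is free-form. For example:

    minikube -p functional-326957 status
    minikube -p functional-326957 status -f 'host:{{.Host}},kubelet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}'
    minikube -p functional-326957 status -o json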

                                                
                                    
x
+
TestFunctional/parallel/AddonsCmd (0.16s)

                                                
                                                
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1695: (dbg) Run:  out/minikube-linux-arm64 -p functional-326957 addons list
functional_test.go:1707: (dbg) Run:  out/minikube-linux-arm64 -p functional-326957 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.16s)

                                                
                                    
x
+
TestFunctional/parallel/PersistentVolumeClaim (25.6s)

                                                
                                                
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:352: "storage-provisioner" [4695b93b-e3b5-40a3-907a-12bc7bb678f9] Running
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.003581447s
functional_test_pvc_test.go:55: (dbg) Run:  kubectl --context functional-326957 get storageclass -o=json
functional_test_pvc_test.go:75: (dbg) Run:  kubectl --context functional-326957 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-326957 get pvc myclaim -o=json
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-326957 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [ea9cb02e-e70d-44c5-b15f-5afe1acebad6] Pending
helpers_test.go:352: "sp-pod" [ea9cb02e-e70d-44c5-b15f-5afe1acebad6] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:352: "sp-pod" [ea9cb02e-e70d-44c5-b15f-5afe1acebad6] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 11.003103989s
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-326957 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:112: (dbg) Run:  kubectl --context functional-326957 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-326957 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [e8a9601c-f062-4928-9fda-f1bb68806108] Pending
helpers_test.go:352: "sp-pod" [e8a9601c-f062-4928-9fda-f1bb68806108] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 7.00522581s
functional_test_pvc_test.go:120: (dbg) Run:  kubectl --context functional-326957 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (25.60s)
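Note: the persistence check above is: create a PVC, run a pod that mounts it, write a file, delete and recreate the pod, then confirm the file survived. With the same manifests from the test's testdata directory:

    kubectl --context functional-326957 apply -f testdata/storage-provisioner/pvc.yaml
    kubectl --context functional-326957 apply -f testdata/storage-provisioner/pod.yaml
    kubectl --context functional-326957 exec sp-pod -- touch /tmp/mount/foo
    kubectl --context functional-326957 delete -f testdata/storage-provisioner/pod.yaml
    kubectl --context functional-326957 apply -f testdata/storage-provisioner/pod.yaml
    kubectl --context functional-326957 exec sp-pod -- ls /tmp/mount     # foo is still there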

                                                
                                    
x
+
TestFunctional/parallel/SSHCmd (0.73s)

                                                
                                                
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1730: (dbg) Run:  out/minikube-linux-arm64 -p functional-326957 ssh "echo hello"
functional_test.go:1747: (dbg) Run:  out/minikube-linux-arm64 -p functional-326957 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.73s)

                                                
                                    
x
+
TestFunctional/parallel/CpCmd (2.22s)

                                                
                                                
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p functional-326957 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p functional-326957 ssh -n functional-326957 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p functional-326957 cp functional-326957:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd1678562008/001/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p functional-326957 ssh -n functional-326957 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p functional-326957 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p functional-326957 ssh -n functional-326957 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (2.22s)
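Note: `minikube cp` copies files between the host and a node in either direction, and the log shows it also creates missing target directories (/tmp/does/not/exist) on the node; each copy is verified by catting the file over ssh. The patterns exercised above:

    minikube -p functional-326957 cp testdata/cp-test.txt /home/docker/cp-test.txt                    # host -> node
    minikube -p functional-326957 cp functional-326957:/home/docker/cp-test.txt /tmp/cp-test.txt      # node -> host
    minikube -p functional-326957 ssh -n functional-326957 "sudo cat /home/docker/cp-test.txt"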

                                                
                                    
x
+
TestFunctional/parallel/FileSync (0.33s)

                                                
                                                
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1934: Checking for existence of /etc/test/nested/copy/296002/hosts within VM
functional_test.go:1936: (dbg) Run:  out/minikube-linux-arm64 -p functional-326957 ssh "sudo cat /etc/test/nested/copy/296002/hosts"
functional_test.go:1941: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.33s)

                                                
                                    
x
+
TestFunctional/parallel/CertSync (2.38s)

                                                
                                                
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1977: Checking for existence of /etc/ssl/certs/296002.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-arm64 -p functional-326957 ssh "sudo cat /etc/ssl/certs/296002.pem"
functional_test.go:1977: Checking for existence of /usr/share/ca-certificates/296002.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-arm64 -p functional-326957 ssh "sudo cat /usr/share/ca-certificates/296002.pem"
functional_test.go:1977: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-arm64 -p functional-326957 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/2960022.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-arm64 -p functional-326957 ssh "sudo cat /etc/ssl/certs/2960022.pem"
functional_test.go:2004: Checking for existence of /usr/share/ca-certificates/2960022.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-arm64 -p functional-326957 ssh "sudo cat /usr/share/ca-certificates/2960022.pem"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-arm64 -p functional-326957 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (2.38s)

                                                
                                    
x
+
TestFunctional/parallel/NodeLabels (0.13s)

                                                
                                                
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:234: (dbg) Run:  kubectl --context functional-326957 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.13s)

                                                
                                    
x
+
TestFunctional/parallel/NonActiveRuntimeDisabled (0.63s)

                                                
                                                
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2032: (dbg) Run:  out/minikube-linux-arm64 -p functional-326957 ssh "sudo systemctl is-active docker"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-326957 ssh "sudo systemctl is-active docker": exit status 1 (328.233361ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
functional_test.go:2032: (dbg) Run:  out/minikube-linux-arm64 -p functional-326957 ssh "sudo systemctl is-active containerd"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-326957 ssh "sudo systemctl is-active containerd": exit status 1 (304.20032ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.63s)

                                                
                                    
x
+
TestFunctional/parallel/License (0.34s)

                                                
                                                
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/License
functional_test.go:2293: (dbg) Run:  out/minikube-linux-arm64 license
--- PASS: TestFunctional/parallel/License (0.34s)

                                                
                                    
x
+
TestFunctional/parallel/Version/short (0.07s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2261: (dbg) Run:  out/minikube-linux-arm64 -p functional-326957 version --short
--- PASS: TestFunctional/parallel/Version/short (0.07s)

                                                
                                    
x
+
TestFunctional/parallel/Version/components (1.28s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2275: (dbg) Run:  out/minikube-linux-arm64 -p functional-326957 version -o=json --components
functional_test.go:2275: (dbg) Done: out/minikube-linux-arm64 -p functional-326957 version -o=json --components: (1.278140972s)
--- PASS: TestFunctional/parallel/Version/components (1.28s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListTable (0.3s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-326957 image ls --format table --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-326957 image ls --format table --alsologtostderr:
┌─────────────────────────────────────────┬────────────────────┬───────────────┬────────┐
│                  IMAGE                  │        TAG         │   IMAGE ID    │  SIZE  │
├─────────────────────────────────────────┼────────────────────┼───────────────┼────────┤
│ docker.io/library/nginx                 │ latest             │ e35ad067421cc │ 184MB  │
│ registry.k8s.io/kube-apiserver          │ v1.34.1            │ 43911e833d64d │ 84.8MB │
│ registry.k8s.io/kube-proxy              │ v1.34.1            │ 05baa95f5142d │ 75.9MB │
│ gcr.io/k8s-minikube/storage-provisioner │ v5                 │ ba04bb24b9575 │ 29MB   │
│ registry.k8s.io/pause                   │ 3.3                │ 3d18732f8686c │ 487kB  │
│ docker.io/kindest/kindnetd              │ v20250512-df8de77b │ b1a8c6f707935 │ 111MB  │
│ docker.io/library/nginx                 │ alpine             │ 9c92f55c0336c │ 54.7MB │
│ registry.k8s.io/kube-controller-manager │ v1.34.1            │ 7eb2c6ff0c5a7 │ 72.6MB │
│ registry.k8s.io/kube-scheduler          │ v1.34.1            │ b5f57ec6b9867 │ 51.6MB │
│ registry.k8s.io/pause                   │ 3.10.1             │ d7b100cd9a77b │ 520kB  │
│ registry.k8s.io/pause                   │ latest             │ 8cb2091f603e7 │ 246kB  │
│ gcr.io/k8s-minikube/busybox             │ 1.28.4-glibc       │ 1611cd07b61d5 │ 3.77MB │
│ registry.k8s.io/coredns/coredns         │ v1.12.1            │ 138784d87c9c5 │ 73.2MB │
│ registry.k8s.io/etcd                    │ 3.6.4-0            │ a1894772a478e │ 206MB  │
│ registry.k8s.io/pause                   │ 3.1                │ 8057e0500773a │ 529kB  │
└─────────────────────────────────────────┴────────────────────┴───────────────┴────────┘
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-326957 image ls --format table --alsologtostderr:
I1009 19:22:01.385692  325153 out.go:360] Setting OutFile to fd 1 ...
I1009 19:22:01.385921  325153 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1009 19:22:01.385961  325153 out.go:374] Setting ErrFile to fd 2...
I1009 19:22:01.385999  325153 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1009 19:22:01.386387  325153 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21683-294150/.minikube/bin
I1009 19:22:01.387297  325153 config.go:182] Loaded profile config "functional-326957": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1009 19:22:01.387529  325153 config.go:182] Loaded profile config "functional-326957": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1009 19:22:01.388227  325153 cli_runner.go:164] Run: docker container inspect functional-326957 --format={{.State.Status}}
I1009 19:22:01.416728  325153 ssh_runner.go:195] Run: systemctl --version
I1009 19:22:01.416809  325153 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-326957
I1009 19:22:01.435599  325153 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33149 SSHKeyPath:/home/jenkins/minikube-integration/21683-294150/.minikube/machines/functional-326957/id_rsa Username:docker}
I1009 19:22:01.539903  325153 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.30s)
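Note: `image ls` reads the node's image store (the stderr trace shows it running `sudo crictl images --output json` over ssh) and renders it in several formats; the neighbouring ImageCommands subtests cover the json and short variants of the same listing. For example:

    minikube -p functional-326957 image ls --format table
    minikube -p functional-326957 image ls --format json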

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListJson (0.27s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-326957 image ls --format json --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-326957 image ls --format json --alsologtostderr:
[{"id":"d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd","repoDigests":["registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c","registry.k8s.io/pause@sha256:e9c466420bcaeede00f46ecfa0ca8cd854c549f2f13330e2723173d88f2de70f"],"repoTags":["registry.k8s.io/pause:3.10.1"],"size":"519884"},{"id":"3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300","repoDigests":["registry.k8s.io/pause@sha256:e59730b14890252c14f85976e22ab1c47ec28b111ffed407f34bca1b44447476"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"487479"},{"id":"20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93","docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf"],"repoTags":[],"size":"247562353"},{"id":"a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e","repoDigests":["regi
stry.k8s.io/etcd@sha256:5db83f9e7ee85732a647f5cf5fbdf85652afa8561b66c99f20756080ebd82ea5","registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19"],"repoTags":["registry.k8s.io/etcd:3.6.4-0"],"size":"205987068"},{"id":"8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a","repoDigests":["registry.k8s.io/pause@sha256:f5e31d44aa14d5669e030380b656463a7e45934c03994e72e3dbf83d4a645cca"],"repoTags":["registry.k8s.io/pause:latest"],"size":"246070"},{"id":"ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2","gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"29037500"},{"id":"138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc","repoDigests":["registry.k8s.io/coredns/coredns@sha256:4779e7
517f375a597f100524db6f7f8b5b8499a6ccd14aacfa65432d4cfd5789","registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c"],"repoTags":["registry.k8s.io/coredns/coredns:v1.12.1"],"size":"73195387"},{"id":"43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196","repoDigests":["registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902","registry.k8s.io/kube-apiserver@sha256:ffe89a0fe39dd71bb6eee7066c95512bd4a8365cb6df23eaf60e70209fe79645"],"repoTags":["registry.k8s.io/kube-apiserver:v1.34.1"],"size":"84753391"},{"id":"b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0","repoDigests":["registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500","registry.k8s.io/kube-scheduler@sha256:d69ae11adb4233d440c302583adee9e3a37cf3626484476fe18ec821953e951e"],"repoTags":["registry.k8s.io/kube-scheduler:v1.34.1"],"size":"51592017"},{"id":"8057e0500773a37cde2cff041eb13ebd
68c748419a2fbfd1dfb5bf38696cc8e5","repoDigests":["registry.k8s.io/pause@sha256:b0602c9f938379133ff8017007894b48c1112681c9468f82a1e4cbf8a4498b67"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"528622"},{"id":"b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c","repoDigests":["docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a","docker.io/kindest/kindnetd@sha256:2bdc3188f2ddc8e54841f69ef900a8dde1280057c97500f966a7ef31364021f1"],"repoTags":["docker.io/kindest/kindnetd:v20250512-df8de77b"],"size":"111333938"},{"id":"9c92f55c0336c2597a5b458ba84a3fd242b209d8b5079443646a0d269df0d4aa","repoDigests":["docker.io/library/nginx@sha256:5d9c9f5c85a351079cc9d2fae74be812ef134f21470926eb2afe8f33ff5859c0","docker.io/library/nginx@sha256:7c1b9a91514d1eb5288d7cd6e91d9f451707911bfaea9307a3acbc811d4aa82e"],"repoTags":["docker.io/library/nginx:alpine"],"size":"54704654"},{"id":"7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a","repoDigests":["registry.
k8s.io/kube-controller-manager@sha256:1276f2ef2e44c06f37d7c3cccaa3f0100d5f4e939e5cfde42343962da346857f","registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.34.1"],"size":"72629077"},{"id":"05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9","repoDigests":["registry.k8s.io/kube-proxy@sha256:90d560a712188ee40c7d03b070c8f2cbcb3097081e62306bc7e68e438cceb9a6","registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a"],"repoTags":["registry.k8s.io/kube-proxy:v1.34.1"],"size":"75938711"},{"id":"e35ad067421ccda484ee30e4ccc8a38fa13f9a21dd8d356e495c2d3a1f0766e9","repoDigests":["docker.io/library/nginx@sha256:3b7732505933ca591ce4a6d860cb713ad96a3176b82f7979a8dfa9973486a0d6","docker.io/library/nginx@sha256:ac03974aaaeb5e3fbe2ab74d7f2badf1388596f6877cbacf78af3617addbba9a"],"repoTags":["docker.io/library/nginx:latest"],"size":"184136558"},{"id":"1611c
d07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"3774172"}]
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-326957 image ls --format json --alsologtostderr:
I1009 19:21:59.416293  324916 out.go:360] Setting OutFile to fd 1 ...
I1009 19:21:59.416460  324916 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1009 19:21:59.416483  324916 out.go:374] Setting ErrFile to fd 2...
I1009 19:21:59.416503  324916 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1009 19:21:59.416890  324916 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21683-294150/.minikube/bin
I1009 19:21:59.418370  324916 config.go:182] Loaded profile config "functional-326957": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1009 19:21:59.418577  324916 config.go:182] Loaded profile config "functional-326957": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1009 19:21:59.419108  324916 cli_runner.go:164] Run: docker container inspect functional-326957 --format={{.State.Status}}
I1009 19:21:59.441636  324916 ssh_runner.go:195] Run: systemctl --version
I1009 19:21:59.441691  324916 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-326957
I1009 19:21:59.462695  324916 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33149 SSHKeyPath:/home/jenkins/minikube-integration/21683-294150/.minikube/machines/functional-326957/id_rsa Username:docker}
I1009 19:21:59.576260  324916 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.27s)
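
For readers who want to post-process a listing like the JSON above, the structure is small: each entry carries id, repoDigests, repoTags, and size (bytes, encoded as a string). Below is a minimal Go sketch, assuming the `image ls --format json` output is piped in on stdin; the type and file names are illustrative, not part of minikube.

package main

import (
    "encoding/json"
    "fmt"
    "os"
)

// image mirrors the fields visible in the `image ls --format json` output above.
type image struct {
    ID          string   `json:"id"`
    RepoDigests []string `json:"repoDigests"`
    RepoTags    []string `json:"repoTags"`
    Size        string   `json:"size"` // size in bytes, encoded as a string
}

func main() {
    var images []image
    if err := json.NewDecoder(os.Stdin).Decode(&images); err != nil {
        fmt.Fprintln(os.Stderr, "decode:", err)
        os.Exit(1)
    }
    for _, img := range images {
        tag := "<none>"
        if len(img.RepoTags) > 0 {
            tag = img.RepoTags[0]
        }
        fmt.Printf("%-60s %s bytes\n", tag, img.Size)
    }
}

Usage would look like `out/minikube-linux-arm64 -p functional-326957 image ls --format json | go run listimages.go` (the file name is hypothetical).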

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListYaml (0.31s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-326957 image ls --format yaml --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-326957 image ls --format yaml --alsologtostderr:
- id: 7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:1276f2ef2e44c06f37d7c3cccaa3f0100d5f4e939e5cfde42343962da346857f
- registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89
repoTags:
- registry.k8s.io/kube-controller-manager:v1.34.1
size: "72629077"
- id: 05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9
repoDigests:
- registry.k8s.io/kube-proxy@sha256:90d560a712188ee40c7d03b070c8f2cbcb3097081e62306bc7e68e438cceb9a6
- registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a
repoTags:
- registry.k8s.io/kube-proxy:v1.34.1
size: "75938711"
- id: 8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5
repoDigests:
- registry.k8s.io/pause@sha256:b0602c9f938379133ff8017007894b48c1112681c9468f82a1e4cbf8a4498b67
repoTags:
- registry.k8s.io/pause:3.1
size: "528622"
- id: 3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300
repoDigests:
- registry.k8s.io/pause@sha256:e59730b14890252c14f85976e22ab1c47ec28b111ffed407f34bca1b44447476
repoTags:
- registry.k8s.io/pause:3.3
size: "487479"
- id: e35ad067421ccda484ee30e4ccc8a38fa13f9a21dd8d356e495c2d3a1f0766e9
repoDigests:
- docker.io/library/nginx@sha256:3b7732505933ca591ce4a6d860cb713ad96a3176b82f7979a8dfa9973486a0d6
- docker.io/library/nginx@sha256:ac03974aaaeb5e3fbe2ab74d7f2badf1388596f6877cbacf78af3617addbba9a
repoTags:
- docker.io/library/nginx:latest
size: "184136558"
- id: 138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:4779e7517f375a597f100524db6f7f8b5b8499a6ccd14aacfa65432d4cfd5789
- registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c
repoTags:
- registry.k8s.io/coredns/coredns:v1.12.1
size: "73195387"
- id: a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e
repoDigests:
- registry.k8s.io/etcd@sha256:5db83f9e7ee85732a647f5cf5fbdf85652afa8561b66c99f20756080ebd82ea5
- registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19
repoTags:
- registry.k8s.io/etcd:3.6.4-0
size: "205987068"
- id: d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd
repoDigests:
- registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c
- registry.k8s.io/pause@sha256:e9c466420bcaeede00f46ecfa0ca8cd854c549f2f13330e2723173d88f2de70f
repoTags:
- registry.k8s.io/pause:3.10.1
size: "519884"
- id: a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a
repoDigests: []
repoTags: []
size: "42263767"
- id: b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c
repoDigests:
- docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a
- docker.io/kindest/kindnetd@sha256:2bdc3188f2ddc8e54841f69ef900a8dde1280057c97500f966a7ef31364021f1
repoTags:
- docker.io/kindest/kindnetd:v20250512-df8de77b
size: "111333938"
- id: 43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902
- registry.k8s.io/kube-apiserver@sha256:ffe89a0fe39dd71bb6eee7066c95512bd4a8365cb6df23eaf60e70209fe79645
repoTags:
- registry.k8s.io/kube-apiserver:v1.34.1
size: "84753391"
- id: b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500
- registry.k8s.io/kube-scheduler@sha256:d69ae11adb4233d440c302583adee9e3a37cf3626484476fe18ec821953e951e
repoTags:
- registry.k8s.io/kube-scheduler:v1.34.1
size: "51592017"
- id: 9c92f55c0336c2597a5b458ba84a3fd242b209d8b5079443646a0d269df0d4aa
repoDigests:
- docker.io/library/nginx@sha256:5d9c9f5c85a351079cc9d2fae74be812ef134f21470926eb2afe8f33ff5859c0
- docker.io/library/nginx@sha256:7c1b9a91514d1eb5288d7cd6e91d9f451707911bfaea9307a3acbc811d4aa82e
repoTags:
- docker.io/library/nginx:alpine
size: "54704654"
- id: 1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "3774172"
- id: 8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a
repoDigests:
- registry.k8s.io/pause@sha256:f5e31d44aa14d5669e030380b656463a7e45934c03994e72e3dbf83d4a645cca
repoTags:
- registry.k8s.io/pause:latest
size: "246070"
- id: 20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
- docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf
repoTags: []
size: "247562353"
- id: ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "29037500"

                                                
                                                
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-326957 image ls --format yaml --alsologtostderr:
I1009 19:22:01.042302  325104 out.go:360] Setting OutFile to fd 1 ...
I1009 19:22:01.042457  325104 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1009 19:22:01.042464  325104 out.go:374] Setting ErrFile to fd 2...
I1009 19:22:01.042469  325104 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1009 19:22:01.042725  325104 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21683-294150/.minikube/bin
I1009 19:22:01.043373  325104 config.go:182] Loaded profile config "functional-326957": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1009 19:22:01.043493  325104 config.go:182] Loaded profile config "functional-326957": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1009 19:22:01.043959  325104 cli_runner.go:164] Run: docker container inspect functional-326957 --format={{.State.Status}}
I1009 19:22:01.073923  325104 ssh_runner.go:195] Run: systemctl --version
I1009 19:22:01.073988  325104 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-326957
I1009 19:22:01.092621  325104 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33149 SSHKeyPath:/home/jenkins/minikube-integration/21683-294150/.minikube/machines/functional-326957/id_rsa Username:docker}
I1009 19:22:01.200627  325104 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.31s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageBuild (4.33s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:323: (dbg) Run:  out/minikube-linux-arm64 -p functional-326957 ssh pgrep buildkitd
functional_test.go:323: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-326957 ssh pgrep buildkitd: exit status 1 (428.700575ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:330: (dbg) Run:  out/minikube-linux-arm64 -p functional-326957 image build -t localhost/my-image:functional-326957 testdata/build --alsologtostderr
2025/10/09 19:22:00 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test.go:330: (dbg) Done: out/minikube-linux-arm64 -p functional-326957 image build -t localhost/my-image:functional-326957 testdata/build --alsologtostderr: (3.660266604s)
functional_test.go:335: (dbg) Stdout: out/minikube-linux-arm64 -p functional-326957 image build -t localhost/my-image:functional-326957 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> 04337cacbf6
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-326957
--> 1e83e13ebe9
Successfully tagged localhost/my-image:functional-326957
1e83e13ebe9a30f925c62686a3d7c5568eafa033c2249d6bf75441698e63a224
functional_test.go:338: (dbg) Stderr: out/minikube-linux-arm64 -p functional-326957 image build -t localhost/my-image:functional-326957 testdata/build --alsologtostderr:
I1009 19:22:00.327577  325040 out.go:360] Setting OutFile to fd 1 ...
I1009 19:22:00.328789  325040 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1009 19:22:00.328858  325040 out.go:374] Setting ErrFile to fd 2...
I1009 19:22:00.328880  325040 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1009 19:22:00.329289  325040 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21683-294150/.minikube/bin
I1009 19:22:00.330159  325040 config.go:182] Loaded profile config "functional-326957": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1009 19:22:00.331094  325040 config.go:182] Loaded profile config "functional-326957": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1009 19:22:00.331685  325040 cli_runner.go:164] Run: docker container inspect functional-326957 --format={{.State.Status}}
I1009 19:22:00.360415  325040 ssh_runner.go:195] Run: systemctl --version
I1009 19:22:00.360485  325040 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-326957
I1009 19:22:00.391548  325040 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33149 SSHKeyPath:/home/jenkins/minikube-integration/21683-294150/.minikube/machines/functional-326957/id_rsa Username:docker}
I1009 19:22:00.510138  325040 build_images.go:161] Building image from path: /tmp/build.874839254.tar
I1009 19:22:00.510296  325040 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1009 19:22:00.522092  325040 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.874839254.tar
I1009 19:22:00.529581  325040 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.874839254.tar: stat -c "%s %y" /var/lib/minikube/build/build.874839254.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.874839254.tar': No such file or directory
I1009 19:22:00.529676  325040 ssh_runner.go:362] scp /tmp/build.874839254.tar --> /var/lib/minikube/build/build.874839254.tar (3072 bytes)
I1009 19:22:00.561194  325040 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.874839254
I1009 19:22:00.572659  325040 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.874839254 -xf /var/lib/minikube/build/build.874839254.tar
I1009 19:22:00.583924  325040 crio.go:315] Building image: /var/lib/minikube/build/build.874839254
I1009 19:22:00.584056  325040 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-326957 /var/lib/minikube/build/build.874839254 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34
Copying config sha256:71a676dd070f4b701c3272e566d84951362f1326ea07d5bbad119d1c4f6b3d02
Writing manifest to image destination
Storing signatures
I1009 19:22:03.684337  325040 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-326957 /var/lib/minikube/build/build.874839254 --cgroup-manager=cgroupfs: (3.100228134s)
I1009 19:22:03.684408  325040 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.874839254
I1009 19:22:03.693006  325040 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.874839254.tar
I1009 19:22:03.701359  325040 build_images.go:217] Built localhost/my-image:functional-326957 from /tmp/build.874839254.tar
I1009 19:22:03.701391  325040 build_images.go:133] succeeded building to: functional-326957
I1009 19:22:03.701397  325040 build_images.go:134] failed building to: 
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-326957 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (4.33s)
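
The build above is driven entirely through the minikube CLI: the test builds from testdata/build and then re-lists images to confirm the new tag exists. A minimal stand-alone sketch of that same check, assuming the binary path and profile name used throughout this report; the constant names are illustrative:

package main

import (
    "fmt"
    "os/exec"
    "strings"
)

const (
    minikube = "out/minikube-linux-arm64" // binary path used throughout this report
    profile  = "functional-326957"
    tag      = "localhost/my-image:" + profile
)

func main() {
    // Build the image from the test's build context (Dockerfile under testdata/build).
    build := exec.Command(minikube, "-p", profile, "image", "build", "-t", tag, "testdata/build", "--alsologtostderr")
    if out, err := build.CombinedOutput(); err != nil {
        fmt.Printf("build failed: %v\n%s", err, out)
        return
    }
    // Re-list images and confirm the new tag is present, as the test does afterwards.
    ls := exec.Command(minikube, "-p", profile, "image", "ls")
    out, err := ls.Output()
    if err != nil {
        fmt.Println("image ls failed:", err)
        return
    }
    fmt.Println("image present:", strings.Contains(string(out), tag))
}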

                                                
                                    
TestFunctional/parallel/ImageCommands/Setup (0.69s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:357: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:362: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-326957
--- PASS: TestFunctional/parallel/ImageCommands/Setup (0.69s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_changes (0.2s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2124: (dbg) Run:  out/minikube-linux-arm64 -p functional-326957 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.20s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.24s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2124: (dbg) Run:  out/minikube-linux-arm64 -p functional-326957 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.24s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.22s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2124: (dbg) Run:  out/minikube-linux-arm64 -p functional-326957 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.22s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageRemove (0.62s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:407: (dbg) Run:  out/minikube-linux-arm64 -p functional-326957 image rm kicbase/echo-server:functional-326957 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-326957 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.62s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.68s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-326957 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-326957 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-326957 tunnel --alsologtostderr] ...
helpers_test.go:525: unable to kill pid 319763: os: process already finished
helpers_test.go:525: unable to kill pid 319623: os: process already finished
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-326957 tunnel --alsologtostderr] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.68s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-arm64 -p functional-326957 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (10.35s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-326957 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:352: "nginx-svc" [fe353cbd-23bb-47ed-a742-e54d34e2584d] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:352: "nginx-svc" [fe353cbd-23bb-47ed-a742-e54d34e2584d] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 10.0033808s
I1009 19:11:32.891812  296002 kapi.go:150] Service nginx-svc in namespace default found.
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (10.35s)
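
WaitService/Setup applies testdata/testsvc.yaml and then waits up to 4m0s for a pod labelled run=nginx-svc to come up. A minimal polling sketch of that wait, shelling out to kubectl with the context used in this run; checking the pod phase is a deliberate simplification of the helper's full Ready-condition check, and the interval is illustrative:

package main

import (
    "fmt"
    "os/exec"
    "strings"
    "time"
)

func main() {
    deadline := time.Now().Add(4 * time.Minute)
    for time.Now().Before(deadline) {
        // Ask for the phase of every pod carrying the label the test waits on.
        out, err := exec.Command("kubectl", "--context", "functional-326957",
            "get", "pods", "-n", "default", "-l", "run=nginx-svc",
            "-o", "jsonpath={.items[*].status.phase}").Output()
        if err == nil && strings.Contains(string(out), "Running") {
            fmt.Println("nginx-svc pod is Running")
            return
        }
        time.Sleep(5 * time.Second)
    }
    fmt.Println("timed out waiting for run=nginx-svc")
}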

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.09s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-326957 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.09s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.110.36.24 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-arm64 -p functional-326957 tunnel --alsologtostderr] ...
functional_test_tunnel_test.go:437: failed to stop process: signal: terminated
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_not_create (0.48s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1285: (dbg) Run:  out/minikube-linux-arm64 profile lis
functional_test.go:1290: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.48s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_list (0.44s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1325: (dbg) Run:  out/minikube-linux-arm64 profile list
functional_test.go:1330: Took "371.323713ms" to run "out/minikube-linux-arm64 profile list"
functional_test.go:1339: (dbg) Run:  out/minikube-linux-arm64 profile list -l
functional_test.go:1344: Took "64.026622ms" to run "out/minikube-linux-arm64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.44s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_json_output (0.44s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1376: (dbg) Run:  out/minikube-linux-arm64 profile list -o json
functional_test.go:1381: Took "381.952339ms" to run "out/minikube-linux-arm64 profile list -o json"
functional_test.go:1389: (dbg) Run:  out/minikube-linux-arm64 profile list -o json --light
functional_test.go:1394: Took "59.620983ms" to run "out/minikube-linux-arm64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.44s)
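
The two profile tests above simply time the same command in its full and lightweight forms (`profile list` at ~370ms versus `profile list -l` at ~64ms, and likewise for the JSON variants). A minimal sketch of how such a timing comparison can be taken, assuming the binary path from this report; the helper name is illustrative:

package main

import (
    "fmt"
    "os/exec"
    "time"
)

// timeRun reports how long one invocation of the given minikube arguments takes,
// mirroring the "Took ..." lines the tests print.
func timeRun(args ...string) time.Duration {
    start := time.Now()
    cmd := exec.Command("out/minikube-linux-arm64", args...)
    _ = cmd.Run() // the output is not interesting here, only the elapsed time
    return time.Since(start)
}

func main() {
    fmt.Println("profile list:         ", timeRun("profile", "list"))
    fmt.Println("profile list -l:      ", timeRun("profile", "list", "-l"))
    fmt.Println("profile list -o json: ", timeRun("profile", "list", "-o", "json"))
}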

                                                
                                    
TestFunctional/parallel/MountCmd/any-port (7.25s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-326957 /tmp/TestFunctionalparallelMountCmdany-port1748214414/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1760037698176040568" to /tmp/TestFunctionalparallelMountCmdany-port1748214414/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1760037698176040568" to /tmp/TestFunctionalparallelMountCmdany-port1748214414/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1760037698176040568" to /tmp/TestFunctionalparallelMountCmdany-port1748214414/001/test-1760037698176040568
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-326957 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-326957 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (376.716163ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1009 19:21:38.553814  296002 retry.go:31] will retry after 614.640645ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-326957 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-arm64 -p functional-326957 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Oct  9 19:21 created-by-test
-rw-r--r-- 1 docker docker 24 Oct  9 19:21 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Oct  9 19:21 test-1760037698176040568
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-arm64 -p functional-326957 ssh cat /mount-9p/test-1760037698176040568
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-326957 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:352: "busybox-mount" [9663e3c9-8e0a-40b1-9cdb-f34b1ee8e14c] Pending
helpers_test.go:352: "busybox-mount" [9663e3c9-8e0a-40b1-9cdb-f34b1ee8e14c] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:352: "busybox-mount" [9663e3c9-8e0a-40b1-9cdb-f34b1ee8e14c] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:352: "busybox-mount" [9663e3c9-8e0a-40b1-9cdb-f34b1ee8e14c] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 4.003818408s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-326957 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-326957 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-326957 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-arm64 -p functional-326957 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-326957 /tmp/TestFunctionalparallelMountCmdany-port1748214414/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (7.25s)
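
The mount probe above fails once (the 9p mount is not up yet), logs "will retry after 614.640645ms", and succeeds on the second attempt. Below is a minimal sketch of that retry-until-mounted loop, assuming the same profile and mount point; the backoff schedule is illustrative rather than minikube's internal one:

package main

import (
    "fmt"
    "os/exec"
    "time"
)

func main() {
    const (
        minikube   = "out/minikube-linux-arm64"
        profile    = "functional-326957"
        mountPoint = "/mount-9p"
    )
    backoff := 500 * time.Millisecond
    for attempt := 1; attempt <= 5; attempt++ {
        // Same probe the test runs over ssh: is the path backed by a 9p mount yet?
        cmd := exec.Command(minikube, "-p", profile, "ssh",
            fmt.Sprintf("findmnt -T %s | grep 9p", mountPoint))
        if err := cmd.Run(); err == nil {
            fmt.Println("mount is ready")
            return
        }
        fmt.Printf("attempt %d: not mounted yet, retrying in %v\n", attempt, backoff)
        time.Sleep(backoff)
        backoff *= 2
    }
    fmt.Println("mount never became ready")
}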

                                                
                                    
TestFunctional/parallel/MountCmd/specific-port (1.95s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-326957 /tmp/TestFunctionalparallelMountCmdspecific-port2615587817/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-326957 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-326957 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (417.347212ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1009 19:21:45.842088  296002 retry.go:31] will retry after 465.319486ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-326957 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-arm64 -p functional-326957 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-326957 /tmp/TestFunctionalparallelMountCmdspecific-port2615587817/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-arm64 -p functional-326957 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-326957 ssh "sudo umount -f /mount-9p": exit status 1 (300.886515ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-arm64 -p functional-326957 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-326957 /tmp/TestFunctionalparallelMountCmdspecific-port2615587817/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.95s)

                                                
                                    
TestFunctional/parallel/MountCmd/VerifyCleanup (1.66s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-326957 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3147697970/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-326957 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3147697970/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-326957 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3147697970/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-326957 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-326957 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-326957 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-arm64 mount -p functional-326957 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-326957 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3147697970/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-326957 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3147697970/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-326957 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3147697970/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.66s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/List (0.65s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1469: (dbg) Run:  out/minikube-linux-arm64 -p functional-326957 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.65s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/JSONOutput (0.66s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1499: (dbg) Run:  out/minikube-linux-arm64 -p functional-326957 service list -o json
functional_test.go:1504: Took "659.574119ms" to run "out/minikube-linux-arm64 -p functional-326957 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.66s)

                                                
                                    
TestFunctional/delete_echo-server_images (0.04s)

                                                
                                                
=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-326957
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

                                                
                                    
TestFunctional/delete_my-image_image (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:213: (dbg) Run:  docker rmi -f localhost/my-image:functional-326957
--- PASS: TestFunctional/delete_my-image_image (0.02s)

                                                
                                    
TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:221: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-326957
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                    
TestMultiControlPlane/serial/StartCluster (195.63s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-arm64 -p ha-807463 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio
E1009 19:24:14.730861  296002 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/addons-999657/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:101: (dbg) Done: out/minikube-linux-arm64 -p ha-807463 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio: (3m14.744940005s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-arm64 -p ha-807463 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/StartCluster (195.63s)

                                                
                                    
TestMultiControlPlane/serial/DeployApp (6.57s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-arm64 -p ha-807463 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-arm64 -p ha-807463 kubectl -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-arm64 -p ha-807463 kubectl -- rollout status deployment/busybox: (3.799251818s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-arm64 -p ha-807463 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-arm64 -p ha-807463 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 -p ha-807463 kubectl -- exec busybox-7b57f96db7-5z2cl -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 -p ha-807463 kubectl -- exec busybox-7b57f96db7-99qlt -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 -p ha-807463 kubectl -- exec busybox-7b57f96db7-xqc7g -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p ha-807463 kubectl -- exec busybox-7b57f96db7-5z2cl -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p ha-807463 kubectl -- exec busybox-7b57f96db7-99qlt -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p ha-807463 kubectl -- exec busybox-7b57f96db7-xqc7g -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 -p ha-807463 kubectl -- exec busybox-7b57f96db7-5z2cl -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 -p ha-807463 kubectl -- exec busybox-7b57f96db7-99qlt -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 -p ha-807463 kubectl -- exec busybox-7b57f96db7-xqc7g -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (6.57s)

                                                
                                    
TestMultiControlPlane/serial/PingHostFromPods (1.61s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-arm64 -p ha-807463 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 -p ha-807463 kubectl -- exec busybox-7b57f96db7-5z2cl -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 -p ha-807463 kubectl -- exec busybox-7b57f96db7-5z2cl -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 -p ha-807463 kubectl -- exec busybox-7b57f96db7-99qlt -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 -p ha-807463 kubectl -- exec busybox-7b57f96db7-99qlt -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 -p ha-807463 kubectl -- exec busybox-7b57f96db7-xqc7g -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 -p ha-807463 kubectl -- exec busybox-7b57f96db7-xqc7g -- sh -c "ping -c 1 192.168.49.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.61s)
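
PingHostFromPods resolves host.minikube.internal from inside each busybox pod (taking the address from the fifth line of nslookup output) and then pings it once to confirm the host (192.168.49.1 here) is reachable. A minimal sketch of those two steps against a single pod, reusing the pod name from the run above; error handling is kept to the bare minimum:

package main

import (
    "fmt"
    "os/exec"
    "strings"
)

func main() {
    const (
        minikube = "out/minikube-linux-arm64"
        profile  = "ha-807463"
        pod      = "busybox-7b57f96db7-5z2cl"
    )
    // Step 1: resolve host.minikube.internal inside the pod, extracting the address
    // the same way the test does (line 5 of nslookup output, third field).
    resolve := exec.Command(minikube, "-p", profile, "kubectl", "--", "exec", pod, "--",
        "sh", "-c", "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3")
    out, err := resolve.Output()
    if err != nil {
        fmt.Println("resolve failed:", err)
        return
    }
    hostIP := strings.TrimSpace(string(out))
    fmt.Println("host.minikube.internal resolves to", hostIP)

    // Step 2: ping the resolved address once from the same pod.
    ping := exec.Command(minikube, "-p", profile, "kubectl", "--", "exec", pod, "--",
        "sh", "-c", "ping -c 1 "+hostIP)
    if err := ping.Run(); err != nil {
        fmt.Println("ping failed:", err)
        return
    }
    fmt.Println("host is reachable from the pod")
}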

                                                
                                    
TestMultiControlPlane/serial/AddWorkerNode (28.82s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-arm64 -p ha-807463 node add --alsologtostderr -v 5
E1009 19:25:37.801645  296002 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/addons-999657/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:228: (dbg) Done: out/minikube-linux-arm64 -p ha-807463 node add --alsologtostderr -v 5: (27.720138785s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-arm64 -p ha-807463 status --alsologtostderr -v 5
ha_test.go:234: (dbg) Done: out/minikube-linux-arm64 -p ha-807463 status --alsologtostderr -v 5: (1.095295481s)
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (28.82s)

                                                
                                    
TestMultiControlPlane/serial/NodeLabels (0.1s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-807463 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.10s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterClusterStart (1.38s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.37724635s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (1.38s)

                                                
                                    
TestMultiControlPlane/serial/CopyFile (20.38s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-linux-arm64 -p ha-807463 status --output json --alsologtostderr -v 5
ha_test.go:328: (dbg) Done: out/minikube-linux-arm64 -p ha-807463 status --output json --alsologtostderr -v 5: (1.078951351s)
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-807463 cp testdata/cp-test.txt ha-807463:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-807463 ssh -n ha-807463 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-807463 cp ha-807463:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1218422779/001/cp-test_ha-807463.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-807463 ssh -n ha-807463 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-807463 cp ha-807463:/home/docker/cp-test.txt ha-807463-m02:/home/docker/cp-test_ha-807463_ha-807463-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-807463 ssh -n ha-807463 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-807463 ssh -n ha-807463-m02 "sudo cat /home/docker/cp-test_ha-807463_ha-807463-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-807463 cp ha-807463:/home/docker/cp-test.txt ha-807463-m03:/home/docker/cp-test_ha-807463_ha-807463-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-807463 ssh -n ha-807463 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-807463 ssh -n ha-807463-m03 "sudo cat /home/docker/cp-test_ha-807463_ha-807463-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-807463 cp ha-807463:/home/docker/cp-test.txt ha-807463-m04:/home/docker/cp-test_ha-807463_ha-807463-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-807463 ssh -n ha-807463 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-807463 ssh -n ha-807463-m04 "sudo cat /home/docker/cp-test_ha-807463_ha-807463-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-807463 cp testdata/cp-test.txt ha-807463-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-807463 ssh -n ha-807463-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-807463 cp ha-807463-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1218422779/001/cp-test_ha-807463-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-807463 ssh -n ha-807463-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-807463 cp ha-807463-m02:/home/docker/cp-test.txt ha-807463:/home/docker/cp-test_ha-807463-m02_ha-807463.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-807463 ssh -n ha-807463-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-807463 ssh -n ha-807463 "sudo cat /home/docker/cp-test_ha-807463-m02_ha-807463.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-807463 cp ha-807463-m02:/home/docker/cp-test.txt ha-807463-m03:/home/docker/cp-test_ha-807463-m02_ha-807463-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-807463 ssh -n ha-807463-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-807463 ssh -n ha-807463-m03 "sudo cat /home/docker/cp-test_ha-807463-m02_ha-807463-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-807463 cp ha-807463-m02:/home/docker/cp-test.txt ha-807463-m04:/home/docker/cp-test_ha-807463-m02_ha-807463-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-807463 ssh -n ha-807463-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-807463 ssh -n ha-807463-m04 "sudo cat /home/docker/cp-test_ha-807463-m02_ha-807463-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-807463 cp testdata/cp-test.txt ha-807463-m03:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-807463 ssh -n ha-807463-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-807463 cp ha-807463-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1218422779/001/cp-test_ha-807463-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-807463 ssh -n ha-807463-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-807463 cp ha-807463-m03:/home/docker/cp-test.txt ha-807463:/home/docker/cp-test_ha-807463-m03_ha-807463.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-807463 ssh -n ha-807463-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-807463 ssh -n ha-807463 "sudo cat /home/docker/cp-test_ha-807463-m03_ha-807463.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-807463 cp ha-807463-m03:/home/docker/cp-test.txt ha-807463-m02:/home/docker/cp-test_ha-807463-m03_ha-807463-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-807463 ssh -n ha-807463-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-807463 ssh -n ha-807463-m02 "sudo cat /home/docker/cp-test_ha-807463-m03_ha-807463-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-807463 cp ha-807463-m03:/home/docker/cp-test.txt ha-807463-m04:/home/docker/cp-test_ha-807463-m03_ha-807463-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-807463 ssh -n ha-807463-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-807463 ssh -n ha-807463-m04 "sudo cat /home/docker/cp-test_ha-807463-m03_ha-807463-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-807463 cp testdata/cp-test.txt ha-807463-m04:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-807463 ssh -n ha-807463-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-807463 cp ha-807463-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1218422779/001/cp-test_ha-807463-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-807463 ssh -n ha-807463-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-807463 cp ha-807463-m04:/home/docker/cp-test.txt ha-807463:/home/docker/cp-test_ha-807463-m04_ha-807463.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-807463 ssh -n ha-807463-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-807463 ssh -n ha-807463 "sudo cat /home/docker/cp-test_ha-807463-m04_ha-807463.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-807463 cp ha-807463-m04:/home/docker/cp-test.txt ha-807463-m02:/home/docker/cp-test_ha-807463-m04_ha-807463-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-807463 ssh -n ha-807463-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-807463 ssh -n ha-807463-m02 "sudo cat /home/docker/cp-test_ha-807463-m04_ha-807463-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-807463 cp ha-807463-m04:/home/docker/cp-test.txt ha-807463-m03:/home/docker/cp-test_ha-807463-m04_ha-807463-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-807463 ssh -n ha-807463-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-807463 ssh -n ha-807463-m03 "sudo cat /home/docker/cp-test_ha-807463-m04_ha-807463-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (20.38s)

TestMultiControlPlane/serial/StopSecondaryNode (12.71s)

=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-linux-arm64 -p ha-807463 node stop m02 --alsologtostderr -v 5
E1009 19:26:21.973541  296002 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/functional-326957/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1009 19:26:21.979999  296002 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/functional-326957/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1009 19:26:21.991477  296002 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/functional-326957/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1009 19:26:22.012957  296002 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/functional-326957/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1009 19:26:22.054367  296002 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/functional-326957/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1009 19:26:22.135728  296002 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/functional-326957/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1009 19:26:22.297268  296002 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/functional-326957/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1009 19:26:22.619165  296002 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/functional-326957/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1009 19:26:23.261054  296002 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/functional-326957/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1009 19:26:24.542493  296002 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/functional-326957/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1009 19:26:27.104699  296002 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/functional-326957/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1009 19:26:32.226876  296002 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/functional-326957/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:365: (dbg) Done: out/minikube-linux-arm64 -p ha-807463 node stop m02 --alsologtostderr -v 5: (11.916318998s)
ha_test.go:371: (dbg) Run:  out/minikube-linux-arm64 -p ha-807463 status --alsologtostderr -v 5
ha_test.go:371: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-807463 status --alsologtostderr -v 5: exit status 7 (789.360225ms)

-- stdout --
	ha-807463
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-807463-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-807463-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-807463-m04
	type: Worker
	host: Running
	kubelet: Running
	

-- /stdout --
** stderr ** 
	I1009 19:26:33.150765  339900 out.go:360] Setting OutFile to fd 1 ...
	I1009 19:26:33.150956  339900 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1009 19:26:33.150979  339900 out.go:374] Setting ErrFile to fd 2...
	I1009 19:26:33.150990  339900 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1009 19:26:33.151321  339900 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21683-294150/.minikube/bin
	I1009 19:26:33.151573  339900 out.go:368] Setting JSON to false
	I1009 19:26:33.151616  339900 mustload.go:65] Loading cluster: ha-807463
	I1009 19:26:33.151702  339900 notify.go:221] Checking for updates...
	I1009 19:26:33.152834  339900 config.go:182] Loaded profile config "ha-807463": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1009 19:26:33.152863  339900 status.go:174] checking status of ha-807463 ...
	I1009 19:26:33.153660  339900 cli_runner.go:164] Run: docker container inspect ha-807463 --format={{.State.Status}}
	I1009 19:26:33.173974  339900 status.go:371] ha-807463 host status = "Running" (err=<nil>)
	I1009 19:26:33.174001  339900 host.go:66] Checking if "ha-807463" exists ...
	I1009 19:26:33.174324  339900 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-807463
	I1009 19:26:33.200247  339900 host.go:66] Checking if "ha-807463" exists ...
	I1009 19:26:33.200553  339900 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1009 19:26:33.200603  339900 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-807463
	I1009 19:26:33.226841  339900 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33154 SSHKeyPath:/home/jenkins/minikube-integration/21683-294150/.minikube/machines/ha-807463/id_rsa Username:docker}
	I1009 19:26:33.330772  339900 ssh_runner.go:195] Run: systemctl --version
	I1009 19:26:33.337639  339900 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1009 19:26:33.351623  339900 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1009 19:26:33.412249  339900 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:62 OomKillDisable:true NGoroutines:72 SystemTime:2025-10-09 19:26:33.400718772 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214827008 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1009 19:26:33.412944  339900 kubeconfig.go:125] found "ha-807463" server: "https://192.168.49.254:8443"
	I1009 19:26:33.412990  339900 api_server.go:166] Checking apiserver status ...
	I1009 19:26:33.413051  339900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 19:26:33.425907  339900 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1227/cgroup
	I1009 19:26:33.435018  339900 api_server.go:182] apiserver freezer: "12:freezer:/docker/fea8f67be9d437ba2c81eace43786d004b06e1b2b690c3ab6f02d448677e04b6/crio/crio-00eed2942da2e040b9e5c77915b13a1322cf18ae8389f5e9cdc46ce68342ca03"
	I1009 19:26:33.435102  339900 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/fea8f67be9d437ba2c81eace43786d004b06e1b2b690c3ab6f02d448677e04b6/crio/crio-00eed2942da2e040b9e5c77915b13a1322cf18ae8389f5e9cdc46ce68342ca03/freezer.state
	I1009 19:26:33.445676  339900 api_server.go:204] freezer state: "THAWED"
	I1009 19:26:33.445703  339900 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1009 19:26:33.457495  339900 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1009 19:26:33.457527  339900 status.go:463] ha-807463 apiserver status = Running (err=<nil>)
	I1009 19:26:33.457539  339900 status.go:176] ha-807463 status: &{Name:ha-807463 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1009 19:26:33.457572  339900 status.go:174] checking status of ha-807463-m02 ...
	I1009 19:26:33.457873  339900 cli_runner.go:164] Run: docker container inspect ha-807463-m02 --format={{.State.Status}}
	I1009 19:26:33.475585  339900 status.go:371] ha-807463-m02 host status = "Stopped" (err=<nil>)
	I1009 19:26:33.475613  339900 status.go:384] host is not running, skipping remaining checks
	I1009 19:26:33.475620  339900 status.go:176] ha-807463-m02 status: &{Name:ha-807463-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1009 19:26:33.475641  339900 status.go:174] checking status of ha-807463-m03 ...
	I1009 19:26:33.475965  339900 cli_runner.go:164] Run: docker container inspect ha-807463-m03 --format={{.State.Status}}
	I1009 19:26:33.497096  339900 status.go:371] ha-807463-m03 host status = "Running" (err=<nil>)
	I1009 19:26:33.497161  339900 host.go:66] Checking if "ha-807463-m03" exists ...
	I1009 19:26:33.497518  339900 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-807463-m03
	I1009 19:26:33.515515  339900 host.go:66] Checking if "ha-807463-m03" exists ...
	I1009 19:26:33.515829  339900 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1009 19:26:33.515872  339900 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-807463-m03
	I1009 19:26:33.534377  339900 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33165 SSHKeyPath:/home/jenkins/minikube-integration/21683-294150/.minikube/machines/ha-807463-m03/id_rsa Username:docker}
	I1009 19:26:33.638961  339900 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1009 19:26:33.652712  339900 kubeconfig.go:125] found "ha-807463" server: "https://192.168.49.254:8443"
	I1009 19:26:33.652777  339900 api_server.go:166] Checking apiserver status ...
	I1009 19:26:33.652835  339900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 19:26:33.665294  339900 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1179/cgroup
	I1009 19:26:33.674685  339900 api_server.go:182] apiserver freezer: "12:freezer:/docker/a4088edd48f101e4e5b27257363177713d82fd45431e72e73cabd5ff25e3bcba/crio/crio-bfecc909051217dda1ce4ffe31fd64af76a5cbb3724922ebc5c1da1be22bcdcd"
	I1009 19:26:33.674765  339900 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/a4088edd48f101e4e5b27257363177713d82fd45431e72e73cabd5ff25e3bcba/crio/crio-bfecc909051217dda1ce4ffe31fd64af76a5cbb3724922ebc5c1da1be22bcdcd/freezer.state
	I1009 19:26:33.684354  339900 api_server.go:204] freezer state: "THAWED"
	I1009 19:26:33.684385  339900 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1009 19:26:33.693008  339900 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1009 19:26:33.693086  339900 status.go:463] ha-807463-m03 apiserver status = Running (err=<nil>)
	I1009 19:26:33.693153  339900 status.go:176] ha-807463-m03 status: &{Name:ha-807463-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1009 19:26:33.693190  339900 status.go:174] checking status of ha-807463-m04 ...
	I1009 19:26:33.693587  339900 cli_runner.go:164] Run: docker container inspect ha-807463-m04 --format={{.State.Status}}
	I1009 19:26:33.718815  339900 status.go:371] ha-807463-m04 host status = "Running" (err=<nil>)
	I1009 19:26:33.718842  339900 host.go:66] Checking if "ha-807463-m04" exists ...
	I1009 19:26:33.719160  339900 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-807463-m04
	I1009 19:26:33.741857  339900 host.go:66] Checking if "ha-807463-m04" exists ...
	I1009 19:26:33.742162  339900 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1009 19:26:33.742211  339900 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-807463-m04
	I1009 19:26:33.762158  339900 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33171 SSHKeyPath:/home/jenkins/minikube-integration/21683-294150/.minikube/machines/ha-807463-m04/id_rsa Username:docker}
	I1009 19:26:33.866929  339900 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1009 19:26:33.886027  339900 status.go:176] ha-807463-m04 status: &{Name:ha-807463-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (12.71s)
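For reference, the scenario exercised above can be replayed by hand with the same binary and profile name used in this run; while any node in the profile is stopped, the status command is expected to exit non-zero (exit status 7 here):
    out/minikube-linux-arm64 -p ha-807463 node stop m02 --alsologtostderr -v 5
    out/minikube-linux-arm64 -p ha-807463 status --alsologtostderr -v 5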

TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.82s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.82s)

TestMultiControlPlane/serial/RestartSecondaryNode (28.21s)

=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-linux-arm64 -p ha-807463 node start m02 --alsologtostderr -v 5
E1009 19:26:42.468730  296002 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/functional-326957/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:422: (dbg) Done: out/minikube-linux-arm64 -p ha-807463 node start m02 --alsologtostderr -v 5: (27.037883221s)
ha_test.go:430: (dbg) Run:  out/minikube-linux-arm64 -p ha-807463 status --alsologtostderr -v 5
ha_test.go:430: (dbg) Done: out/minikube-linux-arm64 -p ha-807463 status --alsologtostderr -v 5: (1.066867867s)
ha_test.go:450: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (28.21s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.18s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
E1009 19:27:02.950727  296002 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/functional-326957/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.181774803s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.18s)

TestMultiControlPlane/serial/StopCluster (23.92s)

=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-linux-arm64 -p ha-807463 stop --alsologtostderr -v 5
E1009 19:36:21.978082  296002 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/functional-326957/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:533: (dbg) Done: out/minikube-linux-arm64 -p ha-807463 stop --alsologtostderr -v 5: (23.804030751s)
ha_test.go:539: (dbg) Run:  out/minikube-linux-arm64 -p ha-807463 status --alsologtostderr -v 5
ha_test.go:539: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-807463 status --alsologtostderr -v 5: exit status 7 (114.913722ms)

-- stdout --
	ha-807463
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-807463-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-807463-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I1009 19:36:35.490458  350909 out.go:360] Setting OutFile to fd 1 ...
	I1009 19:36:35.490575  350909 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1009 19:36:35.490587  350909 out.go:374] Setting ErrFile to fd 2...
	I1009 19:36:35.490592  350909 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1009 19:36:35.490836  350909 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21683-294150/.minikube/bin
	I1009 19:36:35.491042  350909 out.go:368] Setting JSON to false
	I1009 19:36:35.491089  350909 mustload.go:65] Loading cluster: ha-807463
	I1009 19:36:35.491150  350909 notify.go:221] Checking for updates...
	I1009 19:36:35.492432  350909 config.go:182] Loaded profile config "ha-807463": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1009 19:36:35.492459  350909 status.go:174] checking status of ha-807463 ...
	I1009 19:36:35.493203  350909 cli_runner.go:164] Run: docker container inspect ha-807463 --format={{.State.Status}}
	I1009 19:36:35.511165  350909 status.go:371] ha-807463 host status = "Stopped" (err=<nil>)
	I1009 19:36:35.511201  350909 status.go:384] host is not running, skipping remaining checks
	I1009 19:36:35.511210  350909 status.go:176] ha-807463 status: &{Name:ha-807463 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1009 19:36:35.511250  350909 status.go:174] checking status of ha-807463-m02 ...
	I1009 19:36:35.511558  350909 cli_runner.go:164] Run: docker container inspect ha-807463-m02 --format={{.State.Status}}
	I1009 19:36:35.530743  350909 status.go:371] ha-807463-m02 host status = "Stopped" (err=<nil>)
	I1009 19:36:35.530769  350909 status.go:384] host is not running, skipping remaining checks
	I1009 19:36:35.530781  350909 status.go:176] ha-807463-m02 status: &{Name:ha-807463-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1009 19:36:35.530809  350909 status.go:174] checking status of ha-807463-m04 ...
	I1009 19:36:35.531105  350909 cli_runner.go:164] Run: docker container inspect ha-807463-m04 --format={{.State.Status}}
	I1009 19:36:35.553960  350909 status.go:371] ha-807463-m04 host status = "Stopped" (err=<nil>)
	I1009 19:36:35.553986  350909 status.go:384] host is not running, skipping remaining checks
	I1009 19:36:35.553994  350909 status.go:176] ha-807463-m04 status: &{Name:ha-807463-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (23.92s)

TestMultiControlPlane/serial/RestartCluster (81.56s)

=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-linux-arm64 -p ha-807463 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio
ha_test.go:562: (dbg) Done: out/minikube-linux-arm64 -p ha-807463 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio: (1m20.553361048s)
ha_test.go:568: (dbg) Run:  out/minikube-linux-arm64 -p ha-807463 status --alsologtostderr -v 5
ha_test.go:586: (dbg) Run:  kubectl get nodes
ha_test.go:594: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (81.56s)
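The stop/restart cycle covered by the two tests above corresponds roughly to the following manual sequence (flags and profile name taken from this run); restarting the full HA profile took about 80 seconds here:
    out/minikube-linux-arm64 -p ha-807463 stop --alsologtostderr -v 5
    out/minikube-linux-arm64 -p ha-807463 start --wait true --alsologtostderr -v 5 --driver=docker --container-runtime=crio
    out/minikube-linux-arm64 -p ha-807463 status --alsologtostderr -v 5
    kubectl get nodes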

TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.83s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.83s)

TestMultiControlPlane/serial/AddSecondaryNode (84.87s)

=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-linux-arm64 -p ha-807463 node add --control-plane --alsologtostderr -v 5
E1009 19:39:14.731365  296002 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/addons-999657/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:607: (dbg) Done: out/minikube-linux-arm64 -p ha-807463 node add --control-plane --alsologtostderr -v 5: (1m23.700562138s)
ha_test.go:613: (dbg) Run:  out/minikube-linux-arm64 -p ha-807463 status --alsologtostderr -v 5
ha_test.go:613: (dbg) Done: out/minikube-linux-arm64 -p ha-807463 status --alsologtostderr -v 5: (1.166559016s)
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (84.87s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (1.12s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.120150841s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (1.12s)

TestJSONOutput/start/Command (84.08s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-389165 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=crio
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 start -p json-output-389165 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=crio: (1m24.081669217s)
--- PASS: TestJSONOutput/start/Command (84.08s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (5.66s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 stop -p json-output-389165 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 stop -p json-output-389165 --output=json --user=testUser: (5.654970572s)
--- PASS: TestJSONOutput/stop/Command (5.66s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.26s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-error-642167 --memory=3072 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p json-output-error-642167 --memory=3072 --output=json --wait=true --driver=fail: exit status 56 (96.060224ms)

-- stdout --
	{"specversion":"1.0","id":"9f657696-606e-4085-9936-603c9705e328","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-642167] minikube v1.37.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"6564d2a3-7fa8-4496-a3c4-8dfdcd11cf2d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=21683"}}
	{"specversion":"1.0","id":"e89bbde7-5384-443d-9290-2fa4c3390407","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"ca4632cd-8909-4065-93a8-274cae726108","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/21683-294150/kubeconfig"}}
	{"specversion":"1.0","id":"45dde452-72d4-4653-9704-5b37827bc5da","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/21683-294150/.minikube"}}
	{"specversion":"1.0","id":"8819b26a-5e06-4d61-9d1b-0cf0e71cc178","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"b1c9cb4d-67c3-4153-9f2d-6306bd16ad27","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"735a6920-6086-49ff-a84e-751962964df2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}

-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-642167" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p json-output-error-642167
--- PASS: TestErrorJSONOutput (0.26s)

TestKicCustomNetwork/create_custom_network (69.83s)

=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-882619 --network=
E1009 19:41:21.973605  296002 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/functional-326957/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1009 19:42:17.805264  296002 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/addons-999657/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-882619 --network=: (1m7.684432195s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-882619" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-882619
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-882619: (2.120642776s)
--- PASS: TestKicCustomNetwork/create_custom_network (69.83s)

TestKicCustomNetwork/use_default_bridge_network (36.2s)

=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-378639 --network=bridge
E1009 19:42:45.045420  296002 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/functional-326957/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-378639 --network=bridge: (33.868518919s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-378639" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-378639
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-378639: (2.30612809s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (36.20s)

TestKicExistingNetwork (35.91s)

=== RUN   TestKicExistingNetwork
I1009 19:43:00.690827  296002 cli_runner.go:164] Run: docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W1009 19:43:00.710029  296002 cli_runner.go:211] docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I1009 19:43:00.710827  296002 network_create.go:284] running [docker network inspect existing-network] to gather additional debugging logs...
I1009 19:43:00.710870  296002 cli_runner.go:164] Run: docker network inspect existing-network
W1009 19:43:00.726155  296002 cli_runner.go:211] docker network inspect existing-network returned with exit code 1
I1009 19:43:00.726190  296002 network_create.go:287] error running [docker network inspect existing-network]: docker network inspect existing-network: exit status 1
stdout:
[]

stderr:
Error response from daemon: network existing-network not found
I1009 19:43:00.726205  296002 network_create.go:289] output of [docker network inspect existing-network]: -- stdout --
[]

-- /stdout --
** stderr ** 
Error response from daemon: network existing-network not found

** /stderr **
I1009 19:43:00.726324  296002 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I1009 19:43:00.742187  296002 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-3847a6577684 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:6e:b5:e6:7d:c7:ad} reservation:<nil>}
I1009 19:43:00.742514  296002 network.go:206] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4002281340}
I1009 19:43:00.742536  296002 network_create.go:124] attempt to create docker network existing-network 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
I1009 19:43:00.742585  296002 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=existing-network existing-network
I1009 19:43:00.801404  296002 network_create.go:108] docker network existing-network 192.168.58.0/24 created
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-arm64 start -p existing-network-428737 --network=existing-network
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-arm64 start -p existing-network-428737 --network=existing-network: (33.745815336s)
helpers_test.go:175: Cleaning up "existing-network-428737" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p existing-network-428737
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p existing-network-428737: (2.015217569s)
I1009 19:43:36.580950  296002 cli_runner.go:164] Run: docker network ls --filter=label=existing-network --format {{.Name}}
--- PASS: TestKicExistingNetwork (35.91s)
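The test above pre-creates a Docker network and then points minikube at it. A simplified manual equivalent, using the subnet picked in this run and omitting the label flags minikube adds, would be:
    docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 existing-network
    out/minikube-linux-arm64 start -p existing-network-428737 --network=existing-network
    docker network ls --format {{.Name}}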

TestKicCustomSubnet (35.51s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-subnet-688765 --subnet=192.168.60.0/24
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-subnet-688765 --subnet=192.168.60.0/24: (33.35491749s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-688765 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-688765" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p custom-subnet-688765
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p custom-subnet-688765: (2.130974449s)
--- PASS: TestKicCustomSubnet (35.51s)
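Manually, the same check amounts to starting a profile with an explicit subnet and reading the subnet back from the Docker network that minikube creates for it (profile name and subnet taken from this run):
    out/minikube-linux-arm64 start -p custom-subnet-688765 --subnet=192.168.60.0/24
    docker network inspect custom-subnet-688765 --format "{{(index .IPAM.Config 0).Subnet}}"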

TestKicStaticIP (34.75s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-arm64 start -p static-ip-805337 --static-ip=192.168.200.200
E1009 19:44:14.730695  296002 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/addons-999657/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-arm64 start -p static-ip-805337 --static-ip=192.168.200.200: (32.424830775s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-arm64 -p static-ip-805337 ip
helpers_test.go:175: Cleaning up "static-ip-805337" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p static-ip-805337
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p static-ip-805337: (2.158512969s)
--- PASS: TestKicStaticIP (34.75s)
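The static-IP flow above can be reproduced with the same two commands; the ip subcommand is expected to print the address requested at start time (192.168.200.200 in this run):
    out/minikube-linux-arm64 start -p static-ip-805337 --static-ip=192.168.200.200
    out/minikube-linux-arm64 -p static-ip-805337 ip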

TestMainNoArgs (0.05s)

=== RUN   TestMainNoArgs
main_test.go:70: (dbg) Run:  out/minikube-linux-arm64
--- PASS: TestMainNoArgs (0.05s)

TestMinikubeProfile (75.66s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p first-951964 --driver=docker  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p first-951964 --driver=docker  --container-runtime=crio: (33.397043137s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p second-954630 --driver=docker  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p second-954630 --driver=docker  --container-runtime=crio: (36.598711193s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile first-951964
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile second-954630
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
helpers_test.go:175: Cleaning up "second-954630" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p second-954630
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p second-954630: (2.147888244s)
helpers_test.go:175: Cleaning up "first-951964" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p first-951964
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p first-951964: (1.985607131s)
--- PASS: TestMinikubeProfile (75.66s)

TestMountStart/serial/StartWithMountFirst (9.89s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-1-300428 --memory=3072 --mount-string /tmp/TestMountStartserial2534955225/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:118: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-1-300428 --memory=3072 --mount-string /tmp/TestMountStartserial2534955225/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (8.891362977s)
--- PASS: TestMountStart/serial/StartWithMountFirst (9.89s)

TestMountStart/serial/VerifyMountFirst (0.29s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-1-300428 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.29s)

TestMountStart/serial/StartWithMountSecond (9.29s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-302643 --memory=3072 --mount-string /tmp/TestMountStartserial2534955225/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:118: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-302643 --memory=3072 --mount-string /tmp/TestMountStartserial2534955225/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (8.287920773s)
E1009 19:46:21.975148  296002 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/functional-326957/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestMountStart/serial/StartWithMountSecond (9.29s)

TestMountStart/serial/VerifyMountSecond (0.28s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-302643 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.28s)
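Outside the test harness, the mount-only flow exercised here boils down to starting a Kubernetes-free profile with a host directory mounted, then listing that directory over ssh (paths, ports, and profile name are the ones from this run):
    out/minikube-linux-arm64 start -p mount-start-2-302643 --memory=3072 --mount-string /tmp/TestMountStartserial2534955225/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker --container-runtime=crio
    out/minikube-linux-arm64 -p mount-start-2-302643 ssh -- ls /minikube-host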

TestMountStart/serial/DeleteFirst (1.64s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p mount-start-1-300428 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p mount-start-1-300428 --alsologtostderr -v=5: (1.642741436s)
--- PASS: TestMountStart/serial/DeleteFirst (1.64s)

TestMountStart/serial/VerifyMountPostDelete (0.27s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-302643 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.27s)

TestMountStart/serial/Stop (1.22s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:196: (dbg) Run:  out/minikube-linux-arm64 stop -p mount-start-2-302643
mount_start_test.go:196: (dbg) Done: out/minikube-linux-arm64 stop -p mount-start-2-302643: (1.220911166s)
--- PASS: TestMountStart/serial/Stop (1.22s)

TestMountStart/serial/RestartStopped (8.08s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:207: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-302643
mount_start_test.go:207: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-302643: (7.083238709s)
--- PASS: TestMountStart/serial/RestartStopped (8.08s)

TestMountStart/serial/VerifyMountPostStop (0.26s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-302643 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.26s)

TestMultiNode/serial/FreshStart2Nodes (138.26s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-920456 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=crio
multinode_test.go:96: (dbg) Done: out/minikube-linux-arm64 start -p multinode-920456 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=crio: (2m17.716985468s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-arm64 -p multinode-920456 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (138.26s)

TestMultiNode/serial/DeployApp2Nodes (5.24s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-920456 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-920456 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-arm64 kubectl -p multinode-920456 -- rollout status deployment/busybox: (3.404983568s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-920456 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-920456 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-920456 -- exec busybox-7b57f96db7-pxr9x -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-920456 -- exec busybox-7b57f96db7-qjxl6 -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-920456 -- exec busybox-7b57f96db7-pxr9x -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-920456 -- exec busybox-7b57f96db7-qjxl6 -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-920456 -- exec busybox-7b57f96db7-pxr9x -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-920456 -- exec busybox-7b57f96db7-qjxl6 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (5.24s)
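
Note: the deployment and DNS checks above all go through minikube's bundled kubectl. A hedged sketch of the same sequence outside the harness (the manifest path and pod name are placeholders):

    minikube kubectl -p demo-multinode -- apply -f multinode-pod-dns-test.yaml
    minikube kubectl -p demo-multinode -- rollout status deployment/busybox
    # repeat for each busybox pod reported by `get pods`
    minikube kubectl -p demo-multinode -- exec <busybox-pod> -- nslookup kubernetes.default.svc.cluster.local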

                                                
                                    
TestMultiNode/serial/PingHostFrom2Pods (1.08s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-920456 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-920456 -- exec busybox-7b57f96db7-pxr9x -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-920456 -- exec busybox-7b57f96db7-pxr9x -- sh -c "ping -c 1 192.168.67.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-920456 -- exec busybox-7b57f96db7-qjxl6 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-920456 -- exec busybox-7b57f96db7-qjxl6 -- sh -c "ping -c 1 192.168.67.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (1.08s)

                                                
                                    
TestMultiNode/serial/AddNode (26.65s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-920456 -v=5 --alsologtostderr
E1009 19:49:14.730410  296002 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/addons-999657/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:121: (dbg) Done: out/minikube-linux-arm64 node add -p multinode-920456 -v=5 --alsologtostderr: (25.93297325s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-arm64 -p multinode-920456 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (26.65s)
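
Note: adding a worker after the initial start only needs the profile name; a minimal sketch (profile name illustrative):

    minikube node add -p demo-multinode
    minikube -p demo-multinode status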

                                                
                                    
TestMultiNode/serial/MultiNodeLabels (0.09s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-920456 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.09s)

                                                
                                    
TestMultiNode/serial/ProfileList (0.71s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.71s)

                                                
                                    
TestMultiNode/serial/CopyFile (10.53s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-arm64 -p multinode-920456 status --output json --alsologtostderr
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-920456 cp testdata/cp-test.txt multinode-920456:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-920456 ssh -n multinode-920456 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-920456 cp multinode-920456:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2350833424/001/cp-test_multinode-920456.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-920456 ssh -n multinode-920456 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-920456 cp multinode-920456:/home/docker/cp-test.txt multinode-920456-m02:/home/docker/cp-test_multinode-920456_multinode-920456-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-920456 ssh -n multinode-920456 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-920456 ssh -n multinode-920456-m02 "sudo cat /home/docker/cp-test_multinode-920456_multinode-920456-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-920456 cp multinode-920456:/home/docker/cp-test.txt multinode-920456-m03:/home/docker/cp-test_multinode-920456_multinode-920456-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-920456 ssh -n multinode-920456 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-920456 ssh -n multinode-920456-m03 "sudo cat /home/docker/cp-test_multinode-920456_multinode-920456-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-920456 cp testdata/cp-test.txt multinode-920456-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-920456 ssh -n multinode-920456-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-920456 cp multinode-920456-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2350833424/001/cp-test_multinode-920456-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-920456 ssh -n multinode-920456-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-920456 cp multinode-920456-m02:/home/docker/cp-test.txt multinode-920456:/home/docker/cp-test_multinode-920456-m02_multinode-920456.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-920456 ssh -n multinode-920456-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-920456 ssh -n multinode-920456 "sudo cat /home/docker/cp-test_multinode-920456-m02_multinode-920456.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-920456 cp multinode-920456-m02:/home/docker/cp-test.txt multinode-920456-m03:/home/docker/cp-test_multinode-920456-m02_multinode-920456-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-920456 ssh -n multinode-920456-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-920456 ssh -n multinode-920456-m03 "sudo cat /home/docker/cp-test_multinode-920456-m02_multinode-920456-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-920456 cp testdata/cp-test.txt multinode-920456-m03:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-920456 ssh -n multinode-920456-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-920456 cp multinode-920456-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2350833424/001/cp-test_multinode-920456-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-920456 ssh -n multinode-920456-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-920456 cp multinode-920456-m03:/home/docker/cp-test.txt multinode-920456:/home/docker/cp-test_multinode-920456-m03_multinode-920456.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-920456 ssh -n multinode-920456-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-920456 ssh -n multinode-920456 "sudo cat /home/docker/cp-test_multinode-920456-m03_multinode-920456.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-920456 cp multinode-920456-m03:/home/docker/cp-test.txt multinode-920456-m02:/home/docker/cp-test_multinode-920456-m03_multinode-920456-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-920456 ssh -n multinode-920456-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-920456 ssh -n multinode-920456-m02 "sudo cat /home/docker/cp-test_multinode-920456-m03_multinode-920456-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (10.53s)
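
Note: the copy matrix above exercises `minikube cp` between the host and every node, with each copy verified over `ssh -n`. A short sketch of one host-to-node and one node-to-host copy (profile and node names are illustrative):

    minikube -p demo-multinode cp cp-test.txt demo-multinode-m02:/home/docker/cp-test.txt
    minikube -p demo-multinode ssh -n demo-multinode-m02 "sudo cat /home/docker/cp-test.txt"
    minikube -p demo-multinode cp demo-multinode-m02:/home/docker/cp-test.txt ./cp-test-from-m02.txt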

                                                
                                    
TestMultiNode/serial/StopNode (2.34s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-arm64 -p multinode-920456 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-arm64 -p multinode-920456 node stop m03: (1.222675273s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-arm64 -p multinode-920456 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-920456 status: exit status 7 (558.126921ms)

                                                
                                                
-- stdout --
	multinode-920456
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-920456-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-920456-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p multinode-920456 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-920456 status --alsologtostderr: exit status 7 (554.628654ms)

                                                
                                                
-- stdout --
	multinode-920456
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-920456-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-920456-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1009 19:49:39.993932  401620 out.go:360] Setting OutFile to fd 1 ...
	I1009 19:49:39.994052  401620 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1009 19:49:39.994057  401620 out.go:374] Setting ErrFile to fd 2...
	I1009 19:49:39.994061  401620 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1009 19:49:39.994296  401620 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21683-294150/.minikube/bin
	I1009 19:49:39.994479  401620 out.go:368] Setting JSON to false
	I1009 19:49:39.994498  401620 mustload.go:65] Loading cluster: multinode-920456
	I1009 19:49:39.994868  401620 config.go:182] Loaded profile config "multinode-920456": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1009 19:49:39.994947  401620 status.go:174] checking status of multinode-920456 ...
	I1009 19:49:39.994994  401620 notify.go:221] Checking for updates...
	I1009 19:49:39.995500  401620 cli_runner.go:164] Run: docker container inspect multinode-920456 --format={{.State.Status}}
	I1009 19:49:40.021476  401620 status.go:371] multinode-920456 host status = "Running" (err=<nil>)
	I1009 19:49:40.021503  401620 host.go:66] Checking if "multinode-920456" exists ...
	I1009 19:49:40.021875  401620 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-920456
	I1009 19:49:40.053715  401620 host.go:66] Checking if "multinode-920456" exists ...
	I1009 19:49:40.054038  401620 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1009 19:49:40.054095  401620 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-920456
	I1009 19:49:40.074548  401620 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33271 SSHKeyPath:/home/jenkins/minikube-integration/21683-294150/.minikube/machines/multinode-920456/id_rsa Username:docker}
	I1009 19:49:40.174787  401620 ssh_runner.go:195] Run: systemctl --version
	I1009 19:49:40.183147  401620 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1009 19:49:40.197813  401620 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1009 19:49:40.260076  401620 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:49 OomKillDisable:true NGoroutines:62 SystemTime:2025-10-09 19:49:40.250151794 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214827008 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1009 19:49:40.260635  401620 kubeconfig.go:125] found "multinode-920456" server: "https://192.168.67.2:8443"
	I1009 19:49:40.260663  401620 api_server.go:166] Checking apiserver status ...
	I1009 19:49:40.260704  401620 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 19:49:40.273548  401620 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1230/cgroup
	I1009 19:49:40.283250  401620 api_server.go:182] apiserver freezer: "12:freezer:/docker/c8981d32dc9f608fbc9aefbbd7a9db317f411a414caaaed35d5791e4ac127d44/crio/crio-01700723859db5550afea4241032fb63ae55badf7f1f7554b5dc6cfea1b4fe92"
	I1009 19:49:40.283313  401620 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/c8981d32dc9f608fbc9aefbbd7a9db317f411a414caaaed35d5791e4ac127d44/crio/crio-01700723859db5550afea4241032fb63ae55badf7f1f7554b5dc6cfea1b4fe92/freezer.state
	I1009 19:49:40.291536  401620 api_server.go:204] freezer state: "THAWED"
	I1009 19:49:40.291568  401620 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I1009 19:49:40.300818  401620 api_server.go:279] https://192.168.67.2:8443/healthz returned 200:
	ok
	I1009 19:49:40.300853  401620 status.go:463] multinode-920456 apiserver status = Running (err=<nil>)
	I1009 19:49:40.300864  401620 status.go:176] multinode-920456 status: &{Name:multinode-920456 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1009 19:49:40.300888  401620 status.go:174] checking status of multinode-920456-m02 ...
	I1009 19:49:40.301317  401620 cli_runner.go:164] Run: docker container inspect multinode-920456-m02 --format={{.State.Status}}
	I1009 19:49:40.318749  401620 status.go:371] multinode-920456-m02 host status = "Running" (err=<nil>)
	I1009 19:49:40.318776  401620 host.go:66] Checking if "multinode-920456-m02" exists ...
	I1009 19:49:40.319080  401620 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-920456-m02
	I1009 19:49:40.335638  401620 host.go:66] Checking if "multinode-920456-m02" exists ...
	I1009 19:49:40.335948  401620 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1009 19:49:40.335992  401620 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-920456-m02
	I1009 19:49:40.358201  401620 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33276 SSHKeyPath:/home/jenkins/minikube-integration/21683-294150/.minikube/machines/multinode-920456-m02/id_rsa Username:docker}
	I1009 19:49:40.458448  401620 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1009 19:49:40.471626  401620 status.go:176] multinode-920456-m02 status: &{Name:multinode-920456-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I1009 19:49:40.471662  401620 status.go:174] checking status of multinode-920456-m03 ...
	I1009 19:49:40.471976  401620 cli_runner.go:164] Run: docker container inspect multinode-920456-m03 --format={{.State.Status}}
	I1009 19:49:40.489216  401620 status.go:371] multinode-920456-m03 host status = "Stopped" (err=<nil>)
	I1009 19:49:40.489252  401620 status.go:384] host is not running, skipping remaining checks
	I1009 19:49:40.489259  401620 status.go:176] multinode-920456-m03 status: &{Name:multinode-920456-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.34s)
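
Note: as the output above shows, `minikube status` exits with code 7 once any node is stopped, while stdout still lists per-node state. A sketch of the same check in a script (profile name illustrative):

    minikube -p demo-multinode node stop m03
    minikube -p demo-multinode status || echo "status exited $? (7 here means at least one node is not running)"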

                                                
                                    
TestMultiNode/serial/StartAfterStop (8.45s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-arm64 -p multinode-920456 node start m03 -v=5 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-arm64 -p multinode-920456 node start m03 -v=5 --alsologtostderr: (7.670002922s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-arm64 -p multinode-920456 status -v=5 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (8.45s)

                                                
                                    
TestMultiNode/serial/RestartKeepsNodes (78.29s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-920456
multinode_test.go:321: (dbg) Run:  out/minikube-linux-arm64 stop -p multinode-920456
multinode_test.go:321: (dbg) Done: out/minikube-linux-arm64 stop -p multinode-920456: (24.789038002s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-920456 --wait=true -v=5 --alsologtostderr
multinode_test.go:326: (dbg) Done: out/minikube-linux-arm64 start -p multinode-920456 --wait=true -v=5 --alsologtostderr: (53.348453761s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-920456
--- PASS: TestMultiNode/serial/RestartKeepsNodes (78.29s)
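
Note: the point of this step is that a full stop/start cycle preserves the node list. A minimal sketch (profile name illustrative):

    minikube node list -p demo-multinode
    minikube stop -p demo-multinode
    minikube start -p demo-multinode --wait=true
    minikube node list -p demo-multinode   # same nodes as before the stop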

                                                
                                    
TestMultiNode/serial/DeleteNode (5.67s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-arm64 -p multinode-920456 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-arm64 -p multinode-920456 node delete m03: (4.978241837s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-arm64 -p multinode-920456 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.67s)

                                                
                                    
TestMultiNode/serial/StopMultiNode (23.76s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-arm64 -p multinode-920456 stop
E1009 19:51:21.979606  296002 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/functional-326957/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:345: (dbg) Done: out/minikube-linux-arm64 -p multinode-920456 stop: (23.561380421s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-arm64 -p multinode-920456 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-920456 status: exit status 7 (95.125519ms)

                                                
                                                
-- stdout --
	multinode-920456
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-920456-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-arm64 -p multinode-920456 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-920456 status --alsologtostderr: exit status 7 (98.692691ms)

                                                
                                                
-- stdout --
	multinode-920456
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-920456-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1009 19:51:36.611451  409356 out.go:360] Setting OutFile to fd 1 ...
	I1009 19:51:36.611573  409356 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1009 19:51:36.611584  409356 out.go:374] Setting ErrFile to fd 2...
	I1009 19:51:36.611590  409356 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1009 19:51:36.611883  409356 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21683-294150/.minikube/bin
	I1009 19:51:36.612086  409356 out.go:368] Setting JSON to false
	I1009 19:51:36.612129  409356 mustload.go:65] Loading cluster: multinode-920456
	I1009 19:51:36.612189  409356 notify.go:221] Checking for updates...
	I1009 19:51:36.613162  409356 config.go:182] Loaded profile config "multinode-920456": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1009 19:51:36.613187  409356 status.go:174] checking status of multinode-920456 ...
	I1009 19:51:36.613728  409356 cli_runner.go:164] Run: docker container inspect multinode-920456 --format={{.State.Status}}
	I1009 19:51:36.634060  409356 status.go:371] multinode-920456 host status = "Stopped" (err=<nil>)
	I1009 19:51:36.634086  409356 status.go:384] host is not running, skipping remaining checks
	I1009 19:51:36.634094  409356 status.go:176] multinode-920456 status: &{Name:multinode-920456 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1009 19:51:36.634131  409356 status.go:174] checking status of multinode-920456-m02 ...
	I1009 19:51:36.634446  409356 cli_runner.go:164] Run: docker container inspect multinode-920456-m02 --format={{.State.Status}}
	I1009 19:51:36.659383  409356 status.go:371] multinode-920456-m02 host status = "Stopped" (err=<nil>)
	I1009 19:51:36.659406  409356 status.go:384] host is not running, skipping remaining checks
	I1009 19:51:36.659423  409356 status.go:176] multinode-920456-m02 status: &{Name:multinode-920456-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (23.76s)

                                                
                                    
TestMultiNode/serial/RestartMultiNode (48.31s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-920456 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=crio
multinode_test.go:376: (dbg) Done: out/minikube-linux-arm64 start -p multinode-920456 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=crio: (47.609903454s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-arm64 -p multinode-920456 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (48.31s)

                                                
                                    
TestMultiNode/serial/ValidateNameConflict (36.43s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-920456
multinode_test.go:464: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-920456-m02 --driver=docker  --container-runtime=crio
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p multinode-920456-m02 --driver=docker  --container-runtime=crio: exit status 14 (94.95913ms)

                                                
                                                
-- stdout --
	* [multinode-920456-m02] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21683
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21683-294150/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21683-294150/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-920456-m02' is duplicated with machine name 'multinode-920456-m02' in profile 'multinode-920456'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-920456-m03 --driver=docker  --container-runtime=crio
multinode_test.go:472: (dbg) Done: out/minikube-linux-arm64 start -p multinode-920456-m03 --driver=docker  --container-runtime=crio: (33.973961069s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-920456
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-arm64 node add -p multinode-920456: exit status 80 (358.789805ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-920456 as [worker]
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-920456-m03 already exists in multinode-920456-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-arm64 delete -p multinode-920456-m03
multinode_test.go:484: (dbg) Done: out/minikube-linux-arm64 delete -p multinode-920456-m03: (1.938162863s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (36.43s)
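
Note: profile names must be unique both across profiles and across machine names inside a profile, which is what the two failures above (exit 14 / MK_USAGE and exit 80 / GUEST_NODE_ADD) demonstrate. A sketch of the first failing case (names illustrative):

    # fails: the demo-multinode profile already owns a machine called demo-multinode-m02
    minikube start -p demo-multinode-m02 --driver=docker --container-runtime=crio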

                                                
                                    
TestPreload (129.42s)

=== RUN   TestPreload
preload_test.go:43: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-621106 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.32.0
preload_test.go:43: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-621106 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.32.0: (1m5.349820093s)
preload_test.go:51: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-621106 image pull gcr.io/k8s-minikube/busybox
preload_test.go:51: (dbg) Done: out/minikube-linux-arm64 -p test-preload-621106 image pull gcr.io/k8s-minikube/busybox: (2.090688031s)
preload_test.go:57: (dbg) Run:  out/minikube-linux-arm64 stop -p test-preload-621106
E1009 19:54:14.730778  296002 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/addons-999657/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
preload_test.go:57: (dbg) Done: out/minikube-linux-arm64 stop -p test-preload-621106: (5.778344015s)
preload_test.go:65: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-621106 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio
preload_test.go:65: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-621106 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio: (53.595740775s)
preload_test.go:70: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-621106 image list
helpers_test.go:175: Cleaning up "test-preload-621106" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p test-preload-621106
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p test-preload-621106: (2.354087391s)
--- PASS: TestPreload (129.42s)
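
Note: the test starts with --preload=false, pulls an extra image, and checks that the image survives a stop/start cycle. A hedged sketch of the same flow (profile name illustrative):

    minikube start -p demo-preload --memory=3072 --preload=false --kubernetes-version=v1.32.0 --driver=docker --container-runtime=crio
    minikube -p demo-preload image pull gcr.io/k8s-minikube/busybox
    minikube stop -p demo-preload
    minikube start -p demo-preload --memory=3072 --wait=true --driver=docker --container-runtime=crio
    minikube -p demo-preload image list   # busybox should still be listed after the restart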

                                                
                                    
TestScheduledStopUnix (108.26s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-arm64 start -p scheduled-stop-204889 --memory=3072 --driver=docker  --container-runtime=crio
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-arm64 start -p scheduled-stop-204889 --memory=3072 --driver=docker  --container-runtime=crio: (32.473313967s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-204889 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-arm64 status --format={{.TimeToStop}} -p scheduled-stop-204889 -n scheduled-stop-204889
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-204889 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
I1009 19:55:47.886905  296002 retry.go:31] will retry after 79.165µs: open /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/scheduled-stop-204889/pid: no such file or directory
I1009 19:55:47.888082  296002 retry.go:31] will retry after 76.817µs: open /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/scheduled-stop-204889/pid: no such file or directory
I1009 19:55:47.889192  296002 retry.go:31] will retry after 255.094µs: open /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/scheduled-stop-204889/pid: no such file or directory
I1009 19:55:47.890279  296002 retry.go:31] will retry after 415.504µs: open /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/scheduled-stop-204889/pid: no such file or directory
I1009 19:55:47.891415  296002 retry.go:31] will retry after 625.004µs: open /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/scheduled-stop-204889/pid: no such file or directory
I1009 19:55:47.892515  296002 retry.go:31] will retry after 567.004µs: open /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/scheduled-stop-204889/pid: no such file or directory
I1009 19:55:47.893645  296002 retry.go:31] will retry after 1.659283ms: open /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/scheduled-stop-204889/pid: no such file or directory
I1009 19:55:47.895854  296002 retry.go:31] will retry after 1.179629ms: open /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/scheduled-stop-204889/pid: no such file or directory
I1009 19:55:47.898058  296002 retry.go:31] will retry after 1.828885ms: open /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/scheduled-stop-204889/pid: no such file or directory
I1009 19:55:47.900280  296002 retry.go:31] will retry after 4.863351ms: open /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/scheduled-stop-204889/pid: no such file or directory
I1009 19:55:47.906075  296002 retry.go:31] will retry after 7.976871ms: open /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/scheduled-stop-204889/pid: no such file or directory
I1009 19:55:47.914387  296002 retry.go:31] will retry after 7.494388ms: open /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/scheduled-stop-204889/pid: no such file or directory
I1009 19:55:47.922613  296002 retry.go:31] will retry after 7.836534ms: open /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/scheduled-stop-204889/pid: no such file or directory
I1009 19:55:47.930845  296002 retry.go:31] will retry after 14.006956ms: open /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/scheduled-stop-204889/pid: no such file or directory
I1009 19:55:47.945014  296002 retry.go:31] will retry after 19.006224ms: open /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/scheduled-stop-204889/pid: no such file or directory
I1009 19:55:47.964242  296002 retry.go:31] will retry after 28.517543ms: open /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/scheduled-stop-204889/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-204889 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-204889 -n scheduled-stop-204889
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-204889
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-204889 --schedule 15s
E1009 19:56:21.973790  296002 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/functional-326957/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-204889
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p scheduled-stop-204889: exit status 7 (74.067155ms)

                                                
                                                
-- stdout --
	scheduled-stop-204889
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-204889 -n scheduled-stop-204889
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-204889 -n scheduled-stop-204889: exit status 7 (71.192399ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-204889" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p scheduled-stop-204889
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p scheduled-stop-204889: (4.198920684s)
--- PASS: TestScheduledStopUnix (108.26s)
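
Note: scheduled stops are driven entirely by flags on `minikube stop` and can be inspected or cancelled before they fire. A minimal sketch (profile name illustrative):

    minikube stop -p demo-profile --schedule 5m        # arm a stop 5 minutes from now
    minikube status --format={{.TimeToStop}} -p demo-profile
    minikube stop -p demo-profile --cancel-scheduled   # disarm it
    minikube stop -p demo-profile --schedule 15s       # let this one fire; status then exits 7 with host: Stopped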

                                                
                                    
TestInsufficientStorage (13.08s)

=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-arm64 start -p insufficient-storage-763296 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=crio
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p insufficient-storage-763296 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=crio: exit status 26 (10.534719724s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"d90d8e23-ac2d-4204-ad6b-78ae7219ca62","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-763296] minikube v1.37.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"f8c00fd1-79aa-4a52-9b97-dd4bbe37b78b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=21683"}}
	{"specversion":"1.0","id":"e61d79ee-683c-412d-8da0-7460ba92f0c1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"64fb9766-cde0-4b01-a737-4e5b29b31544","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/21683-294150/kubeconfig"}}
	{"specversion":"1.0","id":"592e9a31-6c2f-4a56-9399-799d19fd29e4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/21683-294150/.minikube"}}
	{"specversion":"1.0","id":"3bdc3a67-c6c6-42a5-af23-ffe4db8015a0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"458a763f-9741-409a-b1de-1b303d1e0b9c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"08fea46c-e6e6-4563-b2d9-cc9a3cf08d3b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"8c08fbb2-9fa8-48b2-b5ec-06c9c668f666","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"8b854829-00b9-4708-94b8-0557674c9890","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"5cb19edb-51eb-4967-8d43-6bd8d9699e9d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"5bd44ef7-f402-4626-8af7-5b1910379738","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"insufficient-storage-763296\" primary control-plane node in \"insufficient-storage-763296\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"35f86dd9-eb9d-4307-a2bf-d9e6d1b66a93","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.48-1759745255-21703 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"fbb9df95-b2e7-41f3-89a3-12584956af34","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=3072MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"96f49fdf-650f-49a3-8c2f-9a444be3b89d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

                                                
                                                
-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-763296 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-763296 --output=json --layout=cluster: exit status 7 (295.307517ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-763296","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=3072MB) ...","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-763296","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E1009 19:57:13.950306  425518 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-763296" does not appear in /home/jenkins/minikube-integration/21683-294150/kubeconfig

                                                
                                                
** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-763296 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-763296 --output=json --layout=cluster: exit status 7 (307.358869ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-763296","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-763296","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E1009 19:57:14.261397  425584 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-763296" does not appear in /home/jenkins/minikube-integration/21683-294150/kubeconfig
	E1009 19:57:14.271662  425584 status.go:258] unable to read event log: stat: stat /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/insufficient-storage-763296/events.json: no such file or directory

                                                
                                                
** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-763296" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p insufficient-storage-763296
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p insufficient-storage-763296: (1.939951398s)
--- PASS: TestInsufficientStorage (13.08s)
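
Note: with --output=json both start and status emit machine-readable events, and a full /var maps to exit code 26 (RSRC_DOCKER_STORAGE); the error event itself notes the check can be skipped with --force. A sketch of the status probe used above (profile name illustrative):

    minikube start -p demo-profile --memory=3072 --output=json --wait=true --driver=docker --container-runtime=crio
    minikube status -p demo-profile --output=json --layout=cluster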

                                                
                                    
TestRunningBinaryUpgrade (64.81s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.32.0.654860395 start -p running-upgrade-055303 --memory=3072 --vm-driver=docker  --container-runtime=crio
E1009 20:01:21.974309  296002 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/functional-326957/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.32.0.654860395 start -p running-upgrade-055303 --memory=3072 --vm-driver=docker  --container-runtime=crio: (34.3150712s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-arm64 start -p running-upgrade-055303 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-arm64 start -p running-upgrade-055303 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (19.897841925s)
helpers_test.go:175: Cleaning up "running-upgrade-055303" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p running-upgrade-055303
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p running-upgrade-055303: (2.067551769s)
--- PASS: TestRunningBinaryUpgrade (64.81s)

                                                
                                    
TestKubernetesUpgrade (424.82s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-164946 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-164946 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (40.58138591s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-arm64 stop -p kubernetes-upgrade-164946
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-arm64 stop -p kubernetes-upgrade-164946: (1.311316354s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-arm64 -p kubernetes-upgrade-164946 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-arm64 -p kubernetes-upgrade-164946 status --format={{.Host}}: exit status 7 (129.265808ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-164946 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
E1009 19:59:25.046729  296002 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/functional-326957/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-164946 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (4m37.642519114s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-164946 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-164946 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p kubernetes-upgrade-164946 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio: exit status 106 (122.348247ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-164946] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21683
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21683-294150/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21683-294150/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.34.1 cluster to v1.28.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.28.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-164946
	    minikube start -p kubernetes-upgrade-164946 --kubernetes-version=v1.28.0
	    
	    2) Create a second cluster with Kubernetes 1.28.0, by running:
	    
	    minikube start -p kubernetes-upgrade-1649462 --kubernetes-version=v1.28.0
	    
	    3) Use the existing cluster at version Kubernetes 1.34.1, by running:
	    
	    minikube start -p kubernetes-upgrade-164946 --kubernetes-version=v1.34.1
	    

                                                
                                                
** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-164946 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
E1009 20:04:14.731583  296002 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/addons-999657/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-164946 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (1m42.848295537s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-164946" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubernetes-upgrade-164946
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p kubernetes-upgrade-164946: (2.057113279s)
--- PASS: TestKubernetesUpgrade (424.82s)
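For reference, the flow this test drives can be reproduced by hand with the same commands that appear in the log above (profile name taken from this run; any profile name works). The in-place downgrade is expected to be refused with K8S_DOWNGRADE_UNSUPPORTED, and the only supported way down is delete-and-recreate:

    out/minikube-linux-arm64 start -p kubernetes-upgrade-164946 --memory=3072 --kubernetes-version=v1.34.1 --driver=docker --container-runtime=crio
    out/minikube-linux-arm64 start -p kubernetes-upgrade-164946 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker --container-runtime=crio   # refused, exit status 106
    minikube delete -p kubernetes-upgrade-164946
    minikube start -p kubernetes-upgrade-164946 --kubernetes-version=v1.28.0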

TestMissingContainerUpgrade (135.61s)

=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:309: (dbg) Run:  /tmp/minikube-v1.32.0.599838092 start -p missing-upgrade-917803 --memory=3072 --driver=docker  --container-runtime=crio
version_upgrade_test.go:309: (dbg) Done: /tmp/minikube-v1.32.0.599838092 start -p missing-upgrade-917803 --memory=3072 --driver=docker  --container-runtime=crio: (1m15.964814249s)
version_upgrade_test.go:318: (dbg) Run:  docker stop missing-upgrade-917803
version_upgrade_test.go:318: (dbg) Done: docker stop missing-upgrade-917803: (1.61670581s)
version_upgrade_test.go:323: (dbg) Run:  docker rm missing-upgrade-917803
version_upgrade_test.go:329: (dbg) Run:  out/minikube-linux-arm64 start -p missing-upgrade-917803 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
E1009 19:58:57.807236  296002 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/addons-999657/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1009 19:59:14.731025  296002 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/addons-999657/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:329: (dbg) Done: out/minikube-linux-arm64 start -p missing-upgrade-917803 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (46.529152226s)
helpers_test.go:175: Cleaning up "missing-upgrade-917803" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p missing-upgrade-917803
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p missing-upgrade-917803: (2.908543924s)
--- PASS: TestMissingContainerUpgrade (135.61s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.1s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:85: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-965213 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:85: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p NoKubernetes-965213 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio: exit status 14 (95.347163ms)

-- stdout --
	* [NoKubernetes-965213] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21683
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21683-294150/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21683-294150/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.10s)
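As the MK_USAGE error above shows, --no-kubernetes and --kubernetes-version are mutually exclusive. A rough sketch of the accepted invocations, built from the commands used elsewhere in this group (profile name illustrative):

    out/minikube-linux-arm64 start -p NoKubernetes-965213 --no-kubernetes --memory=3072 --driver=docker --container-runtime=crio
    out/minikube-linux-arm64 start -p NoKubernetes-965213 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker --container-runtime=crio
    minikube config unset kubernetes-version   # only if a version is pinned in the global config, as the error output suggests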

TestNoKubernetes/serial/StartWithK8s (42.02s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:97: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-965213 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:97: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-965213 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (41.612493382s)
no_kubernetes_test.go:202: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-965213 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (42.02s)

TestNoKubernetes/serial/StartWithStopK8s (16.28s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:114: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-965213 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:114: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-965213 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (13.860774891s)
no_kubernetes_test.go:202: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-965213 status -o json
no_kubernetes_test.go:202: (dbg) Non-zero exit: out/minikube-linux-arm64 -p NoKubernetes-965213 status -o json: exit status 2 (382.722437ms)

-- stdout --
	{"Name":"NoKubernetes-965213","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

-- /stdout --
no_kubernetes_test.go:126: (dbg) Run:  out/minikube-linux-arm64 delete -p NoKubernetes-965213
no_kubernetes_test.go:126: (dbg) Done: out/minikube-linux-arm64 delete -p NoKubernetes-965213: (2.034171443s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (16.28s)

TestNoKubernetes/serial/Start (10.05s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:138: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-965213 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:138: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-965213 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (10.044938052s)
--- PASS: TestNoKubernetes/serial/Start (10.05s)

TestNoKubernetes/serial/VerifyK8sNotRunning (0.37s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:149: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-965213 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:149: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-965213 "sudo systemctl is-active --quiet service kubelet": exit status 1 (366.642566ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.37s)

TestNoKubernetes/serial/ProfileList (1.25s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:171: (dbg) Run:  out/minikube-linux-arm64 profile list
no_kubernetes_test.go:181: (dbg) Run:  out/minikube-linux-arm64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (1.25s)

TestNoKubernetes/serial/Stop (1.27s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:160: (dbg) Run:  out/minikube-linux-arm64 stop -p NoKubernetes-965213
no_kubernetes_test.go:160: (dbg) Done: out/minikube-linux-arm64 stop -p NoKubernetes-965213: (1.266579559s)
--- PASS: TestNoKubernetes/serial/Stop (1.27s)

TestNoKubernetes/serial/StartNoArgs (8.53s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:193: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-965213 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:193: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-965213 --driver=docker  --container-runtime=crio: (8.525201894s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (8.53s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.29s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:149: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-965213 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:149: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-965213 "sudo systemctl is-active --quiet service kubelet": exit status 1 (290.502544ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.29s)

TestStoppedBinaryUpgrade/Setup (8.33s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (8.33s)

TestStoppedBinaryUpgrade/Upgrade (59.15s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.32.0.1584013747 start -p stopped-upgrade-265052 --memory=3072 --vm-driver=docker  --container-runtime=crio
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.32.0.1584013747 start -p stopped-upgrade-265052 --memory=3072 --vm-driver=docker  --container-runtime=crio: (38.470837495s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.32.0.1584013747 -p stopped-upgrade-265052 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.32.0.1584013747 -p stopped-upgrade-265052 stop: (1.248747015s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-arm64 start -p stopped-upgrade-265052 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-arm64 start -p stopped-upgrade-265052 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (19.424664541s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (59.15s)

TestStoppedBinaryUpgrade/MinikubeLogs (1.42s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-arm64 logs -p stopped-upgrade-265052
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-arm64 logs -p stopped-upgrade-265052: (1.416554514s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.42s)

TestPause/serial/Start (80.67s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -p pause-383163 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio
pause_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -p pause-383163 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio: (1m20.6665763s)
--- PASS: TestPause/serial/Start (80.67s)

TestPause/serial/SecondStartNoReconfiguration (26.91s)

=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-arm64 start -p pause-383163 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
pause_test.go:92: (dbg) Done: out/minikube-linux-arm64 start -p pause-383163 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (26.894749321s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (26.91s)

TestNetworkPlugins/group/false (3.79s)

=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-arm64 start -p false-535911 --memory=3072 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p false-535911 --memory=3072 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio: exit status 14 (188.58212ms)

-- stdout --
	* [false-535911] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21683
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21683-294150/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21683-294150/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	
	

-- /stdout --
** stderr ** 
	I1009 20:05:46.924078  463224 out.go:360] Setting OutFile to fd 1 ...
	I1009 20:05:46.924269  463224 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1009 20:05:46.924307  463224 out.go:374] Setting ErrFile to fd 2...
	I1009 20:05:46.924322  463224 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1009 20:05:46.924642  463224 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21683-294150/.minikube/bin
	I1009 20:05:46.925176  463224 out.go:368] Setting JSON to false
	I1009 20:05:46.926218  463224 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":10086,"bootTime":1760030261,"procs":173,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1009 20:05:46.926290  463224 start.go:143] virtualization:  
	I1009 20:05:46.930096  463224 out.go:179] * [false-535911] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1009 20:05:46.933883  463224 out.go:179]   - MINIKUBE_LOCATION=21683
	I1009 20:05:46.933962  463224 notify.go:221] Checking for updates...
	I1009 20:05:46.939651  463224 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1009 20:05:46.942712  463224 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21683-294150/kubeconfig
	I1009 20:05:46.945591  463224 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21683-294150/.minikube
	I1009 20:05:46.948457  463224 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1009 20:05:46.951370  463224 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1009 20:05:46.954795  463224 config.go:182] Loaded profile config "force-systemd-flag-736218": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1009 20:05:46.954951  463224 driver.go:422] Setting default libvirt URI to qemu:///system
	I1009 20:05:46.982742  463224 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1009 20:05:46.982880  463224 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1009 20:05:47.046900  463224 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-09 20:05:47.037486116 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214827008 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1009 20:05:47.047018  463224 docker.go:319] overlay module found
	I1009 20:05:47.050088  463224 out.go:179] * Using the docker driver based on user configuration
	I1009 20:05:47.052864  463224 start.go:309] selected driver: docker
	I1009 20:05:47.052881  463224 start.go:930] validating driver "docker" against <nil>
	I1009 20:05:47.052895  463224 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1009 20:05:47.056528  463224 out.go:203] 
	W1009 20:05:47.059757  463224 out.go:285] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I1009 20:05:47.062581  463224 out.go:203] 

** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-535911 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-535911

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-535911

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-535911

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-535911

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-535911

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-535911

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-535911

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-535911

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-535911

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-535911

>>> host: /etc/nsswitch.conf:
* Profile "false-535911" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-535911"

>>> host: /etc/hosts:
* Profile "false-535911" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-535911"

>>> host: /etc/resolv.conf:
* Profile "false-535911" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-535911"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-535911

>>> host: crictl pods:
* Profile "false-535911" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-535911"

>>> host: crictl containers:
* Profile "false-535911" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-535911"

>>> k8s: describe netcat deployment:
error: context "false-535911" does not exist

>>> k8s: describe netcat pod(s):
error: context "false-535911" does not exist

>>> k8s: netcat logs:
error: context "false-535911" does not exist

>>> k8s: describe coredns deployment:
error: context "false-535911" does not exist

>>> k8s: describe coredns pods:
error: context "false-535911" does not exist

>>> k8s: coredns logs:
error: context "false-535911" does not exist

>>> k8s: describe api server pod(s):
error: context "false-535911" does not exist

>>> k8s: api server logs:
error: context "false-535911" does not exist

>>> host: /etc/cni:
* Profile "false-535911" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-535911"

>>> host: ip a s:
* Profile "false-535911" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-535911"

>>> host: ip r s:
* Profile "false-535911" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-535911"

>>> host: iptables-save:
* Profile "false-535911" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-535911"

>>> host: iptables table nat:
* Profile "false-535911" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-535911"

>>> k8s: describe kube-proxy daemon set:
error: context "false-535911" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "false-535911" does not exist

>>> k8s: kube-proxy logs:
error: context "false-535911" does not exist

>>> host: kubelet daemon status:
* Profile "false-535911" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-535911"

>>> host: kubelet daemon config:
* Profile "false-535911" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-535911"

>>> k8s: kubelet logs:
* Profile "false-535911" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-535911"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-535911" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-535911"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-535911" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-535911"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: false-535911

>>> host: docker daemon status:
* Profile "false-535911" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-535911"

>>> host: docker daemon config:
* Profile "false-535911" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-535911"

>>> host: /etc/docker/daemon.json:
* Profile "false-535911" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-535911"

>>> host: docker system info:
* Profile "false-535911" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-535911"

>>> host: cri-docker daemon status:
* Profile "false-535911" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-535911"

>>> host: cri-docker daemon config:
* Profile "false-535911" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-535911"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-535911" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-535911"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-535911" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-535911"

>>> host: cri-dockerd version:
* Profile "false-535911" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-535911"

>>> host: containerd daemon status:
* Profile "false-535911" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-535911"

>>> host: containerd daemon config:
* Profile "false-535911" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-535911"

>>> host: /lib/systemd/system/containerd.service:
* Profile "false-535911" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-535911"

>>> host: /etc/containerd/config.toml:
* Profile "false-535911" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-535911"

>>> host: containerd config dump:
* Profile "false-535911" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-535911"

>>> host: crio daemon status:
* Profile "false-535911" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-535911"

>>> host: crio daemon config:
* Profile "false-535911" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-535911"

>>> host: /etc/crio:
* Profile "false-535911" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-535911"

>>> host: crio config:
* Profile "false-535911" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-535911"

----------------------- debugLogs end: false-535911 [took: 3.448980243s] --------------------------------
helpers_test.go:175: Cleaning up "false-535911" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p false-535911
--- PASS: TestNetworkPlugins/group/false (3.79s)
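This test only verifies that --cni=false is rejected for the crio runtime (the MK_USAGE exit above); with crio a CNI must stay enabled, either auto-selected or named explicitly. The profile name and plugin below are illustrative, not taken from a real run in this report:

    minikube start -p false-535911 --driver=docker --container-runtime=crio               # CNI auto-selected
    minikube start -p false-535911 --driver=docker --container-runtime=crio --cni=bridge  # or pick one explicitly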

TestStartStop/group/old-k8s-version/serial/FirstStart (61.61s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-670649 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0
E1009 20:15:37.811193  296002 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/addons-999657/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-670649 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0: (1m1.605794819s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (61.61s)

TestStartStop/group/no-preload/serial/FirstStart (76.06s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-020313 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-020313 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (1m16.06284768s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (76.06s)

TestStartStop/group/old-k8s-version/serial/DeployApp (9.55s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-670649 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [0b2ec2bb-1c4e-4a74-9583-a369e03ce9b9] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [0b2ec2bb-1c4e-4a74-9583-a369e03ce9b9] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 9.003907527s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-670649 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (9.55s)

TestStartStop/group/old-k8s-version/serial/Stop (13.61s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p old-k8s-version-670649 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p old-k8s-version-670649 --alsologtostderr -v=3: (13.6117246s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (13.61s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.25s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-670649 -n old-k8s-version-670649
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-670649 -n old-k8s-version-670649: exit status 7 (101.288505ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p old-k8s-version-670649 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.25s)

TestStartStop/group/old-k8s-version/serial/SecondStart (60.32s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-670649 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-670649 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0: (59.789035912s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-670649 -n old-k8s-version-670649
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (60.32s)

TestStartStop/group/no-preload/serial/DeployApp (10.35s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-020313 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [bee2655f-729a-4600-b1e4-939eef3e8e2b] Pending
helpers_test.go:352: "busybox" [bee2655f-729a-4600-b1e4-939eef3e8e2b] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [bee2655f-729a-4600-b1e4-939eef3e8e2b] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 10.003847011s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-020313 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (10.35s)

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-pv4kt" [a62b0cc0-36f9-44f2-96c7-87aed1665f8d] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003861146s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.00s)

TestStartStop/group/no-preload/serial/Stop (11.9s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p no-preload-020313 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p no-preload-020313 --alsologtostderr -v=3: (11.898145102s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (11.90s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.1s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-pv4kt" [a62b0cc0-36f9-44f2-96c7-87aed1665f8d] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.00366561s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context old-k8s-version-670649 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.10s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.23s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-670649 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20230511-dc714da8
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.23s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.21s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-020313 -n no-preload-020313
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-020313 -n no-preload-020313: exit status 7 (77.916024ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p no-preload-020313 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.21s)

TestStartStop/group/no-preload/serial/SecondStart (54.06s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-020313 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-020313 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (53.59160522s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-020313 -n no-preload-020313
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (54.06s)

TestStartStop/group/embed-certs/serial/FirstStart (85.25s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-565110 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-565110 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (1m25.250042045s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (85.25s)

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-46jtk" [ffc02df7-a011-4dff-a92d-b4705e05953c] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003526035s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.00s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.11s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-46jtk" [ffc02df7-a011-4dff-a92d-b4705e05953c] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.002975404s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context no-preload-020313 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.11s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.25s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-020313 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.25s)
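The image audit above parses the JSON emitted by "image list --format=json". A minimal sketch for inspecting that output by hand, assuming the JSON is an array of objects carrying a repoTags field (the field name is an assumption, not something stated in this report):

  # List all image tags reported for the profile; the two "non-minikube" images
  # flagged above (kindnetd, busybox) should appear in this list.
  out/minikube-linux-arm64 -p no-preload-020313 image list --format=json | jq -r '.[].repoTags[]?'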

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (88.1s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-417984 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
E1009 20:19:14.731083  296002 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/addons-999657/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-417984 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (1m28.09907366s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (88.10s)

                                                
                                    
TestStartStop/group/embed-certs/serial/DeployApp (9.47s)
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-565110 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [dd2912f1-74cf-4ef4-86cf-f321b48ea8d9] Pending
helpers_test.go:352: "busybox" [dd2912f1-74cf-4ef4-86cf-f321b48ea8d9] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [dd2912f1-74cf-4ef4-86cf-f321b48ea8d9] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 9.00643474s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-565110 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (9.47s)
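The DeployApp step above creates a pod from testdata/busybox.yaml (not shown in this report) and then reads its open-file limit. A rough, hypothetical equivalent using kubectl run; the actual manifest in the repo may differ:

  # Hypothetical stand-in for "kubectl create -f testdata/busybox.yaml":
  # a pod labelled integration-test=busybox running the busybox image seen in the image list above.
  kubectl --context embed-certs-565110 run busybox \
    --image=gcr.io/k8s-minikube/busybox:1.28.4-glibc \
    --labels=integration-test=busybox \
    --command -- sleep 3600
  kubectl --context embed-certs-565110 wait --for=condition=Ready pod/busybox --timeout=120s
  # The check the test performs once the pod is Running:
  kubectl --context embed-certs-565110 exec busybox -- /bin/sh -c "ulimit -n"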

                                                
                                    
TestStartStop/group/embed-certs/serial/Stop (12.36s)
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p embed-certs-565110 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p embed-certs-565110 --alsologtostderr -v=3: (12.358710554s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (12.36s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.34s)
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-565110 -n embed-certs-565110
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-565110 -n embed-certs-565110: exit status 7 (132.185669ms)

                                                
                                                
-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p embed-certs-565110 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.34s)
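The sequence above (stop, status check, re-enabling the dashboard addon) can be scripted directly; exit status 7 from the status command is what the test treats as a stopped-but-healthy host. A minimal sketch using the same commands shown in the log:

  # Stop the profile, then only enable the addon if status reports the host as stopped (exit 7).
  out/minikube-linux-arm64 stop -p embed-certs-565110 --alsologtostderr -v=3
  out/minikube-linux-arm64 status --format='{{.Host}}' -p embed-certs-565110
  if [ $? -eq 7 ]; then
    out/minikube-linux-arm64 addons enable dashboard -p embed-certs-565110 \
      --images=MetricsScraper=registry.k8s.io/echoserver:1.4
  fi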

                                                
                                    
TestStartStop/group/embed-certs/serial/SecondStart (49.86s)
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-565110 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-565110 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (49.489876906s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-565110 -n embed-certs-565110
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (49.86s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.36s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-417984 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [dab2e635-8f2d-4a44-9384-70a522687435] Pending
helpers_test.go:352: "busybox" [dab2e635-8f2d-4a44-9384-70a522687435] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [dab2e635-8f2d-4a44-9384-70a522687435] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 9.004035347s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-417984 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.36s)

                                                
                                    
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6s)
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-f7ckg" [def1cb05-75eb-47cd-8733-e75e6c64ee66] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003353816s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.00s)

                                                
                                    
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (6.1s)
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-f7ckg" [def1cb05-75eb-47cd-8733-e75e6c64ee66] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003410122s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context embed-certs-565110 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (6.10s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Stop (11.94s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p default-k8s-diff-port-417984 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p default-k8s-diff-port-417984 --alsologtostderr -v=3: (11.941641952s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (11.94s)

                                                
                                    
TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.24s)
=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p embed-certs-565110 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.24s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.25s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-417984 -n default-k8s-diff-port-417984
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-417984 -n default-k8s-diff-port-417984: exit status 7 (101.444279ms)

                                                
                                                
-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p default-k8s-diff-port-417984 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.25s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (57s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-417984 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-417984 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (56.437465216s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-417984 -n default-k8s-diff-port-417984
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (57.00s)
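This profile is started with --apiserver-port=8444. A sketch for confirming the generated kubeconfig entry points at that port; it assumes the cluster entry is named after the profile, which this report does not state:

  # Print the API server URL recorded for the profile's cluster; it should end in :8444.
  kubectl config view -o jsonpath='{.clusters[?(@.name=="default-k8s-diff-port-417984")].cluster.server}'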

                                                
                                    
TestStartStop/group/newest-cni/serial/FirstStart (40.62s)
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-160257 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
E1009 20:21:10.949316  296002 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/old-k8s-version-670649/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1009 20:21:10.955668  296002 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/old-k8s-version-670649/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1009 20:21:10.967030  296002 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/old-k8s-version-670649/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1009 20:21:10.988400  296002 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/old-k8s-version-670649/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1009 20:21:11.029692  296002 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/old-k8s-version-670649/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1009 20:21:11.112481  296002 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/old-k8s-version-670649/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1009 20:21:11.275899  296002 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/old-k8s-version-670649/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1009 20:21:11.602183  296002 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/old-k8s-version-670649/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1009 20:21:12.243898  296002 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/old-k8s-version-670649/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1009 20:21:13.525776  296002 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/old-k8s-version-670649/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1009 20:21:16.087853  296002 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/old-k8s-version-670649/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1009 20:21:21.209579  296002 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/old-k8s-version-670649/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1009 20:21:21.974025  296002 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/functional-326957/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1009 20:21:31.451549  296002 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/old-k8s-version-670649/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-160257 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (40.619940377s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (40.62s)

                                                
                                    
TestStartStop/group/newest-cni/serial/DeployApp (0s)
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Stop (1.25s)
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p newest-cni-160257 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p newest-cni-160257 --alsologtostderr -v=3: (1.252465969s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (1.25s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.21s)
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-160257 -n newest-cni-160257
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-160257 -n newest-cni-160257: exit status 7 (77.620961ms)

                                                
                                                
-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p newest-cni-160257 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.21s)

                                                
                                    
TestStartStop/group/newest-cni/serial/SecondStart (16.16s)
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-160257 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
E1009 20:21:51.933238  296002 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/old-k8s-version-670649/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-160257 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (15.695282692s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-160257 -n newest-cni-160257
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (16.16s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-m9vdk" [bfd5a7ab-0d4e-46ae-b4e4-c2c6aa18bcf2] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003572723s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.00s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.13s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-m9vdk" [bfd5a7ab-0d4e-46ae-b4e4-c2c6aa18bcf2] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004014715s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context default-k8s-diff-port-417984 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.13s)

                                                
                                    
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:271: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:282: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.26s)
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-160257 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.26s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.38s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p default-k8s-diff-port-417984 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.38s)

                                                
                                    
TestNetworkPlugins/group/auto/Start (85.03s)
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p auto-535911 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p auto-535911 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio: (1m25.03165894s)
--- PASS: TestNetworkPlugins/group/auto/Start (85.03s)

                                                
                                    
TestNetworkPlugins/group/flannel/Start (51.42s)
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p flannel-535911 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio
E1009 20:22:26.590224  296002 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/no-preload-020313/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1009 20:22:26.596948  296002 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/no-preload-020313/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1009 20:22:26.608331  296002 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/no-preload-020313/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1009 20:22:26.630228  296002 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/no-preload-020313/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1009 20:22:26.672073  296002 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/no-preload-020313/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1009 20:22:26.757144  296002 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/no-preload-020313/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1009 20:22:26.922676  296002 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/no-preload-020313/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1009 20:22:27.244344  296002 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/no-preload-020313/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1009 20:22:27.886340  296002 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/no-preload-020313/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1009 20:22:29.168574  296002 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/no-preload-020313/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1009 20:22:31.730174  296002 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/no-preload-020313/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1009 20:22:32.894540  296002 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/old-k8s-version-670649/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1009 20:22:36.852147  296002 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/no-preload-020313/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1009 20:22:47.093511  296002 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/no-preload-020313/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1009 20:23:07.575122  296002 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/no-preload-020313/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p flannel-535911 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio: (51.416955317s)
--- PASS: TestNetworkPlugins/group/flannel/Start (51.42s)

                                                
                                    
TestNetworkPlugins/group/flannel/ControllerPod (6.01s)
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:352: "kube-flannel-ds-sp7w5" [cb44af96-3164-475c-9074-9543c500d506] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.003362129s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)
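The controller-pod wait above polls for pods labelled app=flannel in the kube-flannel namespace. The same check can be done directly with kubectl wait (a sketch using the label and namespace from the log):

  kubectl --context flannel-535911 -n kube-flannel \
    wait --for=condition=Ready pod -l app=flannel --timeout=10m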

                                                
                                    
TestNetworkPlugins/group/flannel/KubeletFlags (0.32s)
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p flannel-535911 "pgrep -a kubelet"
I1009 20:23:18.429291  296002 config.go:182] Loaded profile config "flannel-535911": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.32s)

                                                
                                    
TestNetworkPlugins/group/flannel/NetCatPod (11.27s)
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-535911 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-vmktf" [da84b5cb-7da8-4391-a6ca-e4d438f1748e] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-vmktf" [da84b5cb-7da8-4391-a6ca-e4d438f1748e] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 11.003438805s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (11.27s)

                                                
                                    
TestNetworkPlugins/group/flannel/DNS (0.17s)
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-535911 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.17s)

                                                
                                    
TestNetworkPlugins/group/flannel/Localhost (0.24s)
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-535911 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.24s)

                                                
                                    
TestNetworkPlugins/group/flannel/HairPin (0.15s)
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-535911 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.15s)
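For reference, the DNS, Localhost, and HairPin checks above reduce to three execs against the netcat deployment; the commands below are copied from the log lines for this group and can be run manually against the same context:

  kubectl --context flannel-535911 exec deployment/netcat -- nslookup kubernetes.default
  kubectl --context flannel-535911 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
  kubectl --context flannel-535911 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"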

                                                
                                    
TestNetworkPlugins/group/auto/KubeletFlags (0.4s)
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p auto-535911 "pgrep -a kubelet"
I1009 20:23:39.102224  296002 config.go:182] Loaded profile config "auto-535911": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.40s)

                                                
                                    
TestNetworkPlugins/group/auto/NetCatPod (13.39s)
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-535911 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-szgjt" [0aac0729-7215-4e6b-bd3d-ed1922b1fd0d] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-szgjt" [0aac0729-7215-4e6b-bd3d-ed1922b1fd0d] Running
E1009 20:23:48.537352  296002 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/no-preload-020313/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 13.00356933s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (13.39s)

                                                
                                    
TestNetworkPlugins/group/auto/DNS (0.16s)
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-535911 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.16s)

                                                
                                    
TestNetworkPlugins/group/auto/Localhost (0.14s)
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-535911 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.14s)

                                                
                                    
TestNetworkPlugins/group/auto/HairPin (0.14s)
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-535911 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.14s)

                                                
                                    
TestNetworkPlugins/group/calico/Start (68.29s)
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p calico-535911 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio
E1009 20:23:54.815799  296002 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/old-k8s-version-670649/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1009 20:24:14.730691  296002 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/addons-999657/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p calico-535911 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio: (1m8.288136101s)
--- PASS: TestNetworkPlugins/group/calico/Start (68.29s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Start (65.3s)
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-flannel-535911 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-flannel-535911 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio: (1m5.3028576s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (65.30s)

                                                
                                    
TestNetworkPlugins/group/calico/ControllerPod (6.01s)
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:352: "calico-node-l82v7" [6205383d-097c-4cab-a29a-8878ea9750d0] Running / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.005357271s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/calico/KubeletFlags (0.46s)
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p calico-535911 "pgrep -a kubelet"
I1009 20:25:08.946461  296002 config.go:182] Loaded profile config "calico-535911": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.46s)

                                                
                                    
TestNetworkPlugins/group/calico/NetCatPod (11.35s)
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-535911 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-mw8wc" [19aae072-6c63-4650-aa36-6ed608545c99] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1009 20:25:10.458947  296002 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/no-preload-020313/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "netcat-cd4db9dbf-mw8wc" [19aae072-6c63-4650-aa36-6ed608545c99] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 11.00460111s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (11.35s)

                                                
                                    
TestNetworkPlugins/group/calico/DNS (0.18s)
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-535911 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.18s)

                                                
                                    
TestNetworkPlugins/group/calico/Localhost (0.14s)
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-535911 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.14s)

                                                
                                    
TestNetworkPlugins/group/calico/HairPin (0.18s)
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-535911 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.18s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.3s)
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p custom-flannel-535911 "pgrep -a kubelet"
I1009 20:25:23.293271  296002 config.go:182] Loaded profile config "custom-flannel-535911": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.30s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/NetCatPod (11.29s)
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-535911 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-nn7km" [c283c7c7-9952-4eb8-b431-b9429ed976e2] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-nn7km" [c283c7c7-9952-4eb8-b431-b9429ed976e2] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 11.004154999s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (11.29s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/DNS (0.25s)
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-535911 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.25s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Localhost (0.18s)
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-535911 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.18s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/HairPin (0.2s)
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-535911 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.20s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Start (89.24s)
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p kindnet-535911 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio
E1009 20:25:45.680226  296002 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/default-k8s-diff-port-417984/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1009 20:25:55.922880  296002 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/default-k8s-diff-port-417984/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p kindnet-535911 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio: (1m29.23863114s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (89.24s)

                                                
                                    
TestNetworkPlugins/group/bridge/Start (77.2s)
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p bridge-535911 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio
E1009 20:26:10.949293  296002 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/old-k8s-version-670649/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1009 20:26:16.404856  296002 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/default-k8s-diff-port-417984/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1009 20:26:21.974000  296002 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/functional-326957/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1009 20:26:38.657533  296002 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/old-k8s-version-670649/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1009 20:26:57.366380  296002 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/default-k8s-diff-port-417984/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p bridge-535911 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio: (1m17.19556627s)
--- PASS: TestNetworkPlugins/group/bridge/Start (77.20s)

                                                
                                    
TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:352: "kindnet-tcj6h" [d72183dd-70eb-4b3a-8fdf-f2aad38b6260] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.004577285s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/bridge/KubeletFlags (0.31s)
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p bridge-535911 "pgrep -a kubelet"
I1009 20:27:18.271023  296002 config.go:182] Loaded profile config "bridge-535911": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.31s)

                                                
                                    
TestNetworkPlugins/group/bridge/NetCatPod (12.29s)
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-535911 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-hxmtv" [f4d23d2d-8a88-4df4-9805-8834fcc86710] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-hxmtv" [f4d23d2d-8a88-4df4-9805-8834fcc86710] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 12.00389011s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (12.29s)

                                                
                                    
TestNetworkPlugins/group/kindnet/KubeletFlags (0.49s)
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p kindnet-535911 "pgrep -a kubelet"
I1009 20:27:20.268213  296002 config.go:182] Loaded profile config "kindnet-535911": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.49s)

                                                
                                    
TestNetworkPlugins/group/kindnet/NetCatPod (11.38s)
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-535911 replace --force -f testdata/netcat-deployment.yaml
I1009 20:27:20.618138  296002 kapi.go:136] Waiting for deployment netcat to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-n899d" [b4680022-112f-4617-b9e5-95ada8b8d16c] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-n899d" [b4680022-112f-4617-b9e5-95ada8b8d16c] Running
E1009 20:27:26.590506  296002 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/no-preload-020313/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 11.003251773s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (11.38s)

                                                
                                    
TestNetworkPlugins/group/bridge/DNS (0.16s)
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-535911 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.16s)

                                                
                                    
TestNetworkPlugins/group/bridge/Localhost (0.13s)
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-535911 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.13s)

                                                
                                    
TestNetworkPlugins/group/bridge/HairPin (0.13s)
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-535911 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.13s)

TestNetworkPlugins/group/kindnet/DNS (0.17s)
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-535911 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.17s)

TestNetworkPlugins/group/kindnet/Localhost (0.14s)
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-535911 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.14s)

TestNetworkPlugins/group/kindnet/HairPin (0.13s)
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-535911 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.13s)

TestNetworkPlugins/group/enable-default-cni/Start (47.45s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p enable-default-cni-535911 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p enable-default-cni-535911 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio: (47.453296726s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (47.45s)

TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.3s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p enable-default-cni-535911 "pgrep -a kubelet"
I1009 20:28:43.563436  296002 config.go:182] Loaded profile config "enable-default-cni-535911": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.30s)

TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.3s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-535911 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-vgrt5" [d9a10cbc-48c6-4037-8af8-8f8b763c46ea] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1009 20:28:44.587582  296002 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/auto-535911/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "netcat-cd4db9dbf-vgrt5" [d9a10cbc-48c6-4037-8af8-8f8b763c46ea] Running
E1009 20:28:49.709727  296002 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/auto-535911/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1009 20:28:53.087525  296002 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-294150/.minikube/profiles/flannel-535911/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 10.003311472s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.30s)

TestNetworkPlugins/group/enable-default-cni/DNS (0.16s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-535911 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.16s)

TestNetworkPlugins/group/enable-default-cni/Localhost (0.14s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-535911 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.14s)

TestNetworkPlugins/group/enable-default-cni/HairPin (0.14s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-535911 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.14s)
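The DNS, Localhost, and HairPin results above all exercise the same netcat deployment. A minimal sketch of the three probes against the enable-default-cni-535911 profile, using the same commands the tests invoke (assuming the cluster is still up):

  # service DNS resolution from inside the pod
  kubectl --context enable-default-cni-535911 exec deployment/netcat -- nslookup kubernetes.default
  # loopback connectivity inside the pod
  kubectl --context enable-default-cni-535911 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
  # hairpin: the pod reaching itself through its own "netcat" service
  kubectl --context enable-default-cni-535911 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"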

Test skip (31/327)

TestDownloadOnly/v1.28.0/cached-images (0s)
=== RUN   TestDownloadOnly/v1.28.0/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.0/cached-images (0.00s)

TestDownloadOnly/v1.28.0/binaries (0s)
=== RUN   TestDownloadOnly/v1.28.0/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.0/binaries (0.00s)

TestDownloadOnly/v1.28.0/kubectl (0s)
=== RUN   TestDownloadOnly/v1.28.0/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.0/kubectl (0.00s)

TestDownloadOnly/v1.34.1/cached-images (0s)
=== RUN   TestDownloadOnly/v1.34.1/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.34.1/cached-images (0.00s)

TestDownloadOnly/v1.34.1/binaries (0s)
=== RUN   TestDownloadOnly/v1.34.1/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.34.1/binaries (0.00s)

TestDownloadOnly/v1.34.1/kubectl (0s)
=== RUN   TestDownloadOnly/v1.34.1/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.34.1/kubectl (0.00s)

TestDownloadOnlyKic (0.42s)
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:231: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p download-docker-847696 --alsologtostderr --driver=docker  --container-runtime=crio
aaa_download_only_test.go:248: Skip for arm64 platform. See https://github.com/kubernetes/minikube/issues/10144
helpers_test.go:175: Cleaning up "download-docker-847696" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p download-docker-847696
--- SKIP: TestDownloadOnlyKic (0.42s)

TestOffline (0s)
=== RUN   TestOffline
=== PAUSE TestOffline
=== CONT  TestOffline
aab_offline_test.go:35: skipping TestOffline - only docker runtime supported on arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestOffline (0.00s)

TestAddons/serial/GCPAuth/RealCredentials (0s)
=== RUN   TestAddons/serial/GCPAuth/RealCredentials
addons_test.go:759: This test requires a GCE instance (excluding Cloud Shell) with a container based driver
--- SKIP: TestAddons/serial/GCPAuth/RealCredentials (0.00s)

TestAddons/parallel/Olm (0s)
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm
=== CONT  TestAddons/parallel/Olm
addons_test.go:483: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestAddons/parallel/AmdGpuDevicePlugin (0s)
=== RUN   TestAddons/parallel/AmdGpuDevicePlugin
=== PAUSE TestAddons/parallel/AmdGpuDevicePlugin
=== CONT  TestAddons/parallel/AmdGpuDevicePlugin
addons_test.go:1033: skip amd gpu test on all but docker driver and amd64 platform
--- SKIP: TestAddons/parallel/AmdGpuDevicePlugin (0.00s)

TestDockerFlags (0s)
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

TestDockerEnvContainerd (0s)
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio true linux arm64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestKVMDriverInstallOrUpdate (0s)
=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:45: Skip if arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:114: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:178: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/MySQL (0s)
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1792: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctional/parallel/MySQL (0.00s)

TestFunctional/parallel/DockerEnv (0s)
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:478: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

TestFunctional/parallel/PodmanEnv (0s)
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:565: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

TestFunctionalNewestKubernetes (0s)
=== RUN   TestFunctionalNewestKubernetes
functional_test.go:82: 
--- SKIP: TestFunctionalNewestKubernetes (0.00s)

TestGvisorAddon (0s)
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild (0s)
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

TestChangeNoneUser (0s)
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

TestScheduledStopWindows (0s)
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestSkaffold (0s)
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

TestStartStop/group/disable-driver-mounts (0.17s)
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:101: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-613966" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p disable-driver-mounts-613966
--- SKIP: TestStartStop/group/disable-driver-mounts (0.17s)

TestNetworkPlugins/group/kubenet (3.91s)
=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as crio container runtimes requires CNI
panic.go:636: 
----------------------- debugLogs start: kubenet-535911 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-535911

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-535911

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-535911

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-535911

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-535911

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-535911

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-535911

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-535911

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-535911

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-535911

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "kubenet-535911" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-535911"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "kubenet-535911" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-535911"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "kubenet-535911" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-535911"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-535911

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "kubenet-535911" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-535911"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "kubenet-535911" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-535911"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "kubenet-535911" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "kubenet-535911" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "kubenet-535911" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "kubenet-535911" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "kubenet-535911" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "kubenet-535911" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "kubenet-535911" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "kubenet-535911" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "kubenet-535911" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-535911"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "kubenet-535911" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-535911"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "kubenet-535911" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-535911"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "kubenet-535911" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-535911"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "kubenet-535911" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-535911"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-535911" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-535911" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "kubenet-535911" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "kubenet-535911" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-535911"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "kubenet-535911" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-535911"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "kubenet-535911" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-535911"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-535911" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-535911"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-535911" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-535911"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-535911

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "kubenet-535911" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-535911"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "kubenet-535911" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-535911"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "kubenet-535911" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-535911"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "kubenet-535911" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-535911"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "kubenet-535911" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-535911"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "kubenet-535911" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-535911"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-535911" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-535911"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-535911" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-535911"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "kubenet-535911" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-535911"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "kubenet-535911" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-535911"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "kubenet-535911" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-535911"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-535911" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-535911"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "kubenet-535911" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-535911"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "kubenet-535911" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-535911"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "kubenet-535911" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-535911"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "kubenet-535911" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-535911"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "kubenet-535911" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-535911"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "kubenet-535911" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-535911"

                                                
                                                
----------------------- debugLogs end: kubenet-535911 [took: 3.751577209s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-535911" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubenet-535911
--- SKIP: TestNetworkPlugins/group/kubenet (3.91s)

TestNetworkPlugins/group/cilium (4.03s)
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:636: 
----------------------- debugLogs start: cilium-535911 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-535911

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-535911

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-535911

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-535911

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-535911

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-535911

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-535911

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-535911

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-535911

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-535911

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-535911" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-535911"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-535911" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-535911"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-535911" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-535911"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-535911

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-535911" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-535911"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-535911" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-535911"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-535911" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-535911" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-535911" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-535911" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-535911" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-535911" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-535911" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-535911" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-535911" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-535911"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-535911" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-535911"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-535911" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-535911"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "cilium-535911" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-535911"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "cilium-535911" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-535911"

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-535911

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-535911

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-535911" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-535911" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-535911

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-535911

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-535911" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-535911" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "cilium-535911" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "cilium-535911" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "cilium-535911" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "cilium-535911" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-535911"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "cilium-535911" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-535911"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "cilium-535911" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-535911"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-535911" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-535911"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-535911" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-535911"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-535911

>>> host: docker daemon status:
* Profile "cilium-535911" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-535911"

>>> host: docker daemon config:
* Profile "cilium-535911" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-535911"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-535911" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-535911"

>>> host: docker system info:
* Profile "cilium-535911" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-535911"

>>> host: cri-docker daemon status:
* Profile "cilium-535911" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-535911"

>>> host: cri-docker daemon config:
* Profile "cilium-535911" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-535911"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-535911" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-535911"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-535911" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-535911"

>>> host: cri-dockerd version:
* Profile "cilium-535911" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-535911"

>>> host: containerd daemon status:
* Profile "cilium-535911" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-535911"

>>> host: containerd daemon config:
* Profile "cilium-535911" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-535911"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-535911" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-535911"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-535911" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-535911"

>>> host: containerd config dump:
* Profile "cilium-535911" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-535911"

>>> host: crio daemon status:
* Profile "cilium-535911" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-535911"

>>> host: crio daemon config:
* Profile "cilium-535911" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-535911"

>>> host: /etc/crio:
* Profile "cilium-535911" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-535911"

>>> host: crio config:
* Profile "cilium-535911" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-535911"

----------------------- debugLogs end: cilium-535911 [took: 3.875112748s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-535911" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cilium-535911
--- SKIP: TestNetworkPlugins/group/cilium (4.03s)